The univariate data distribution of $(X, Y)$ with an increasingly noisy tail:
$$
\begin{cases}
\mathcal{P}_X = \mathcal{N}(0, 1), \\
\mathcal{P}_{Y \mid X} = \mathcal{N}\bigl(X/2,\ (|X| + 1)/2\bigr).
\end{cases}
$$
We use the GBM method with the quadratic loss. Other experimental settings remain consistent with those described in Section 6. We present the conditional coverage and average prediction interval lengths, together with the Jaccard similarity of each method. The SGD optimizer uses a small learning rate $\eta = 0.01$. Results show that ELCP still performs well, as it does under the Huber loss.
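As a concrete illustration, the data-generating process above can be sampled as follows. This is a minimal sketch: the function name `sample_data` is ours, and we assume the second parameter of $\mathcal{N}$ denotes the standard deviation (the text does not state whether it is the variance or the standard deviation).

```python
import numpy as np

def sample_data(n, rng=None):
    """Draw n samples from the synthetic distribution with a noisy tail.

    X ~ N(0, 1); Y | X ~ N(X/2, (|X| + 1)/2), where (|X| + 1)/2 is
    treated as the standard deviation (an assumption, not stated in
    the text). Noise grows with |X|, producing heavier tails.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(n)
    y = rng.normal(loc=x / 2, scale=(np.abs(x) + 1) / 2)
    return x, y

# Example draw with a fixed seed for reproducibility
x, y = sample_data(1000, rng=np.random.default_rng(0))
```

A GBM regressor with squared-error loss could then be fit on `(x, y)` to reproduce the experimental setup.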