The univariate data distribution of $(X, Y)$ with an increasing noisy tail:
\[
P_X = \mathcal{N}(0, 1), \qquad P_{Y \mid X} = \mathcal{N}\bigl(X/2,\, (|X|+1)/2\bigr).
\]
We use the GBM method with the quadratic loss. Other experimental settings remain consistent with those described in Section 6. We present the conditional coverage and the averaged prediction intervals, together with the Jaccard similarity of each method. SGD is used for optimization with a small learning rate $\eta = 0.01$. Results show that ELCP still performs well, as it does under the Huber loss.
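As a minimal sketch, the data-generating process above can be sampled as follows. This is illustrative only; the function name `sample_data` is not from the paper, and we assume the second parameter of the conditional Gaussian denotes the standard deviation:

```python
import numpy as np

def sample_data(n, rng):
    """Draw n i.i.d. pairs (X, Y) from the heteroscedastic model:
    X ~ N(0, 1) and Y | X ~ N(X/2, (|X|+1)/2)."""
    # Covariate: standard normal.
    x = rng.normal(0.0, 1.0, size=n)
    # Response: mean X/2, noise scale growing with |X|
    # (assumed here to be the standard deviation).
    y = rng.normal(x / 2.0, (np.abs(x) + 1.0) / 2.0)
    return x, y

rng = np.random.default_rng(0)
x, y = sample_data(1000, rng)
```

The noise scale $(|X|+1)/2$ grows linearly in $|X|$, so prediction intervals must widen in the tails, which is what the conditional-coverage comparison probes.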