I am trying to determine whether Spark accepts the extreme values held by Oracle's FLOAT(126). I am loading 2^-126, the smallest such floating-point value, into a double-typed column of a Spark DataFrame. When the value is read back from the DataFrame, the fractional part appears to be rounded off after 54 bits.
See the following code:
>>> import pyspark.sql.functions as f
>>> df = spark.createDataFrame([(float(0.000000000000000000000000000000000000011754943508222875079687365372222456778186655567720875215087517062784172594547271728515625),)], ['flt_val'])
>>> df.printSchema()
root
|-- flt_val: double (nullable = true)
>>> df.select(f.format_number(f.col('flt_val'), 126), 'flt_val').show(truncate=False)
+--------------------------------------------------------------------------------------------------------------------------------+----------------------+
|format_number(flt_val, 126) |flt_val |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------+
|0.000000000000000000000000000000000000011754943508222875000000000000000000000000000000000000000000000000000000000000000000000000|1.1754943508222875E-38|
+--------------------------------------------------------------------------------------------------------------------------------+----------------------+
As you can see, both the value displayed as-is and the formatted value lose the significant digits that follow 11754943508222875.
How can I avoid this precision loss?