[SciPy-User] dtype of LabView binary files
David
david at silveregg.co.jp
Tue Nov 9 23:35:26 EST 2010
On 11/10/2010 01:08 PM, Christoph Gohlke wrote:
>
>
> On 11/9/2010 2:34 PM, Xunchen Liu wrote:
>>
>> Hello,
>>
>> It seems there is only one web page here talking about the dtype of the
>> binary file saved from LabVIEW:
>>
>> http://www.shocksolution.com/2008/06/25/reading-labview-binary-files-with-python/
>>
>> I followed Travis' suggestion on that page to convert one of my LabVIEW
>> binary files using
>>
>> data=numpy.fromfile('name',dtype='>d')
>>
>> but this gives an array with twice the shape of my recorded data, and the
>> values are not right.
>>
>> For example, attached are the text file and binary file saved by LabVIEW.
>>
>> the text file reads:
>>
>> array([-2332., -2420., -2460., ..., 1660., 1788., 1804.])
>>
>> while the binary file reads (with dtype='>d')
>>
>> array([-3.30078125, 0. , -3.30297852, ..., 0. ,
>> -2.6953125 , 0. ])
>>
>> Does anyone know what dtype I should use, or how I should build the
>> correct dtype for it?
>>
>> Thanks a lot!
>>
>> Xunchen Liu
>>
>
> Those data are big-endian, 80-bit IEEE extended-precision numbers,
> padded to 128 bits in the binary file. Not sure if/how such data can be
> read into numpy without bit manipulations.
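For reference, a minimal sketch of the bit-level decoding Christoph alludes
to might look like the following. It assumes each value occupies 16 bytes,
with the 10 payload bytes (x87-style 80-bit extended: 1 sign bit, 15-bit
exponent biased by 16383, 64-bit mantissa with an explicit integer bit)
first and 6 padding bytes last; 'waveform.bin' is a placeholder file name,
and NaN/Inf records are not handled:

    import numpy as np

    # Assumed record layout: 10 payload bytes followed by 6 padding bytes,
    # all big-endian; swap the fields if the padding turns out to lead.
    rec = np.dtype([('se', '>u2'),    # sign bit + 15-bit exponent
                    ('mant', '>u8'),  # 64-bit mantissa, integer bit included
                    ('pad', 'V6')])   # assumed trailing padding
    raw = np.fromfile('waveform.bin', dtype=rec)

    sign = np.where(raw['se'] & 0x8000, -1.0, 1.0)
    exponent = (raw['se'] & 0x7FFF).astype(np.int64) - 16383
    # Converting the mantissa to float64 keeps ~53 significant bits, which
    # is usually enough once the data end up in ordinary float arrays.
    data = sign * raw['mant'].astype(np.float64) * 2.0 ** (exponent - 63)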
It should be possible at least on 64-bit machines (where sizeof(long
double) == 16 bytes), and you may be able to get away with it on 32-bit
machines if you use a composite dtype with the second field used for
padding, i.e. you assume you have an array of N rows with two columns,
the first column being a 12-byte type and the second a 4-byte type (say
int on 32-bit archs), or the other way around.
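A minimal sketch of that composite-dtype idea, assuming a 32-bit x86 build
where np.longdouble is 12 bytes and reusing the placeholder file name from
above:

    import numpy as np

    # Composite dtype: 12-byte long double plus 4 bytes of padding = 16
    # bytes per record (only on a 32-bit build; on a 64-bit build,
    # np.longdouble is already 16 bytes and can be used on its own).
    rec32 = np.dtype([('value', np.longdouble),
                      ('pad', np.uint32)])
    data = np.fromfile('waveform.bin', dtype=rec32)['value']

Note that this only matches the record layout: the file is big-endian, so
on a little-endian machine the values would likely still come out wrong
unless the bytes within each record are reordered first; the bit-level
sketch above sidesteps that.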
cheers,
David