Access to onboard orientation and location sensors


#1

Hi all,
According to the 4K specification, there is a plethora of internal attitude, orientation and location sensors, namely a gyro, an accelerometer, a magnetometer (compass) and GPS. I went through the SDK documentation and I haven’t found any mention of an access method, or of any metadata associated with images (video), that makes use of these. Lots of impressive applications could be built with them. Any clue as to how or when these may be used? Regards!


#2

I put together a doc that should help.

GPS Sensor Data.pdf (137.2 KB)


#3

Many thanks for the doc, that was what I was looking for and it will be useful.
One question, one remark and a suggestion: I see that one of the outputs is called “fusion” and provides yaw, pitch and roll. Which sensors do you fuse together (magnetometer, gyro and/or accelerometer)? Also, the yaw unit is given as deg/second while the other two are in degrees; is there a reason for that, or should we read all three outputs as degrees?
About the magnetometer, is it possible that the unit should be “microTesla” or “unitless” rather than degrees? It could be degrees if you compute the 3D “compass” values…

My suggestion would be for the SDK to provide some kind of access function to read the sensor values, either on request or by enabling (and disabling) sensor output at a specific frequency.


#4

We are actually making a change to the Fusion to calculate the Yaw, Pitch and Roll differently in a future release. For now, I would suggest calculating them yourself from the output of the Magnetometer, Gyro and Accelerometer. The Magnetometer is indeed in microTesla; that was an older doc that I sent you. I will pass your feedback on the SDK to the team.
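For a rough idea of what that calculation looks like, here is a sketch of the standard tilt-compensated approach (generic math, not our firmware’s algorithm; the axis conventions and signs for the 360fly sensors are an assumption and may need flipping):

```python
import math

def yaw_pitch_roll(ax, ay, az, mx, my, mz):
    """Tilt-compensated orientation from a static accelerometer reading
    (gravity) and a magnetometer reading. Assumes the common aerospace
    x-forward / y-right / z-down frame; the 360fly axis directions and
    signs may differ."""
    # Roll and pitch from the gravity direction
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic field into the horizontal plane, then take
    # the heading (yaw relative to magnetic north)
    bfx = (mx * math.cos(pitch)
           + my * math.sin(pitch) * math.sin(roll)
           + mz * math.sin(pitch) * math.cos(roll))
    bfy = my * math.cos(roll) - mz * math.sin(roll)
    yaw = math.atan2(-bfy, bfx)
    return math.degrees(yaw), math.degrees(pitch), math.degrees(roll)
```

Note this only works while the camera is static (so the accelerometer reads pure gravity); blending in the gyro for motion is what the Fusion output is for.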


#5

This is great! I’d love to keep abreast of any updates to this. I am looking for a camera that creates 360 video I can use in VR and AR (augmented reality) applications, and having the telemetry of the video is absolutely critical. (I’m a professor at Georgia Tech who does AR work, and am currently at Mozilla working on webVR and webAR projects).

Right now, it’s easy to display 360 video in VR, but if I can align the video with the earth’s orientation, and know the location, then I can make it easy to overlay geo-located content on top of the video. I really just need to know the fused orientation in some well-defined coordinate frame (ECEF? Local orientation on the surface of the earth?) … or, perhaps, a “wee bit” of help fusing it myself.
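To be concrete, here is roughly what I would do with such an output (a sketch under my own assumptions: yaw/pitch/roll given in a local NED frame with the aerospace Z-Y-X convention, composed with the GPS fix into a single body-to-ECEF rotation; nothing in the doc confirms these conventions):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def body_to_ecef(yaw, pitch, roll, lat, lon):
    """Camera body -> ECEF rotation, all angles in radians.
    Assumes yaw/pitch/roll describe body -> local NED (Z-Y-X order)."""
    r_nb = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)   # body -> NED
    sp, cp = np.sin(lat), np.cos(lat)
    sl, cl = np.sin(lon), np.cos(lon)
    r_en = np.array([[-sp * cl, -sl, -cp * cl],       # NED -> ECEF
                     [-sp * sl,  cl, -cp * sl],
                     [ cp,      0.0, -sp     ]])
    return r_en @ r_nb
```

ECEF is overkill for some uses; a local east-north-up frame at the camera’s position would already be enough for overlaying nearby geo-located content.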

Would LOVE to build some demos of this to put on the web, and get some of these cameras for our students to work with to create AR/VR web demos.


#6

Further dumb question: any pointers to how one extracts metadata from a video??


#7

Ok, I just signed up to be a developer; hopefully (assuming you approve it) some of the obvious questions will be answered in the SDK.


#8

If you just signed up, you will get an email with the SDK. It should arrive instantly. Let me know if you did not get it.


#9

Hi,

I’ve been able to extract GPS data with RaceRender since its newest update, but I was still hoping to extract the GPS/sensor data with something like a Python script. I have done this before with other GPS-enabled cameras, but I have been having problems finding the correct box that contains the sensor data. I have looked through a document posted here detailing the MP4 box structure but am still at a loss. From what I can make out, the boxes contained in a 360fly MP4 are:

ftyp
uuid
moov
free
mdat
dat
ZZZZ

I was hoping for some help with what I might be doing wrong, or whether I should be searching in a sub-box. I’m hoping to automate a sensor-data extraction tool in Python so I can run it on a bunch of videos from the command line.
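For reference, this is roughly the box walker I’m using (a minimal sketch; the size handling follows the ISO BMFF rules, and I’m only guessing that the sensor payload lives in one of the non-standard boxes listed above, e.g. `dat` or `ZZZZ`):

```python
import struct

def iter_boxes(path):
    """Yield (type, offset, size) for each top-level box in an MP4 file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            offset = f.tell() - 8
            if size == 1:
                # 64-bit "largesize" follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:
                # Box runs to the end of the file
                f.seek(0, 2)
                size = f.tell() - offset
            yield box_type.decode("latin-1"), offset, size
            f.seek(offset + size)

for box_type, offset, size in iter_boxes("video.mp4"):
    print(box_type, offset, size)
```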

Regards,
Daire


#10

We just released a new Beta 360fly Director application, version 0.9, where you can now export the sensor data as CSV.
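If you want to pull the export into a script, something like this is a quick way to inspect what it contains (I won’t promise the column layout here, so check the header row of the file Director actually writes):

```python
import csv

# Dump the header and first few rows of a Director CSV export to see
# which fields (GPS, accel, gyro, magnetometer, fusion) are present.
with open("sensors.csv", newline="") as f:
    reader = csv.reader(f)
    print(next(reader))  # header row
    for i, row in enumerate(reader):
        print(row)
        if i >= 4:
            break
```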