SuperGirl

Wingsuit Formation Analyzing Software


Quote

I tried it on my old G4 Mac, but that didn't really work very well. If anyone has a newer Intel Mac and wants to give it a try, I'd welcome the feedback.

You can get Mono here: http://www.go-mono.com



I downloaded and installed Mono on my Intel Mac, but I have no idea how to start the .exe from your webpage in Mono. It's not really obvious. Or do you need to provide me with a different binary?

Cheers,

Costyn
Costyn van Dongen - http://www.flylikebrick.com/ - World Wide Wingsuit News

Quote

Anyway... after installing Mono, open up a Terminal, go into the folder where the flock briefing tool is, and type:

mono .exe

and... voila! done!



Coooool.... works for me too now! :o :D B|

Quote

It crashes quite easily under Mono though, by using undo, for example. Tom, do you want the error messages from the terminal posted here?

Cheers,

Costyn



Interesting that it even works ;) hihi...
On my G4 MacBook, it didn't even start up.

Bugs can be reported in Mantis; the message from the terminal would be a good start, yes, but ultimately I just need some time on an Intel Mac. A friend of mine here in Vancouver is all Mac, so I might be able to borrow one for a couple of hours.

Heading to the XP Paraclete tunnel for some 8-way training for the rest of this week, so I won't be able to look at this until Monday.

Quote

It crashes quite easily under Mono though, by using undo, for example. Tom, do you want the error messages from the terminal posted here?

Cheers,

Costyn



New version online; the above problem should be fixed.
I also added the tolerance settings to the exported image; not sure if I really like it though.

http://www.tomvandijck.com/flock/bin/FlockBriefing_0_8_18_beta.zip

Quote

New version online; the above problem should be fixed.
I also added the tolerance settings to the exported image; not sure if I really like it though.

http://www.tomvandijck.com/flock/bin/FlockBriefing_0_8_18_beta.zip



Cool, it works perfectly on the Mac now!

The tolerance settings in the exported image are a nice touch; I like them. But mine shows the full path to the image used; maybe that's unnecessary?

Quote

I like them. But mine shows the full path to the image used; maybe that's unnecessary?



You're just worried it's going to show the world that you keep your wingsuit photos in

c:/porn/ponyplay/wtfomg/chicago16way.jpg

aren't you?

I write computer vision software in C++. I'm wondering if I might be able to leverage this into auto-formation recognition software.

Here's a framework I developed called "Magnetic". It's used mostly for multi-touch and augmented reality, but note that it can track fingertips, head position, etc. And check out how I'm doing the calibration, which uses a very similar grid system. Video:
http://vimeo.com/10220241

I'm thinking that you would load up the video and let it play. Every frame would be analyzed (note, I get about 60 - 100fps with this framework, so no sweat on speed). When a formation within proposed guidelines is recognized, the video pauses on the successful frame... maybe bounces out an image with the overlay.

And of course have some manual stuff going on, as well (much like the manual calibration in the video).

If nothing else, I could hand Magnetic off to any C++/openFrameworks developer out there that wants to pursue it.

Quote

I write computer vision software in C++. I'm wondering if I might be able to leverage this into auto-formation recognition software.

Here's a framework I developed called "Magnetic". It's used mostly for multi-touch and augmented reality, but note that it can track fingertips, head position, etc. And check out how I'm doing the calibration, which uses a very similar grid system. Video:
http://vimeo.com/10220241



That looks very cool and could definitely be useful here!

Quote


I'm thinking that you would load up the video and let it play. Every frame would be analyzed (note, I get about 60 - 100fps with this framework, so no sweat on speed).




YES!!! I do recall initial conversations about something like this eventually becoming a nice (yet rather distant) goal... and thinking about how beneficial it could be... saving us the time of sorting through frames and frames of the same formation...

Regardless of what judging method is used, being able to detect and track the actual coordinates of the flyers in the formation would enable us to do so many cool things!

Quote


If nothing else, I could hand Magnetic off to any C++/openFrameworks developer out there that wants to pursue it.



I think you and Tom need to talk :) Thanks for offering to share the knowledge!

Quote

I'm thinking that you would load up the video and let it play. Every frame would be analyzed (note, I get about 60 - 100fps with this framework, so no sweat on speed). When a formation within proposed guidelines is recognized, the video pauses on the successful frame... maybe bounces out an image with the overlay.



Yes, yes, yes!!!

I've been wanting to do something like this for a long time. I'd done some preliminary research into papers that have been written, but hadn't come across any promising articles yet.

But if you say it can be done, that would rock very hard! Autodetecting people in the formation would be very cool and useful. Thanks for posting that and helping out.

Cheers,

Costyn.

Yeah, I think I could analyze the "border" for an average color and use some tolerance tweaking to differentiate bodies from background. Shooting from below would be much better than from above, as the sky has less variation to work out than the ground (unless you're over desert, etc.).

Let's say the average background hue after applying a little blur is 0x7093A6. On each pixel, I bit shift out the RGB values and compare the pixel color to the average background. If the color of the current pixel is off by more than, say, 16, we turn it white to represent part of a flyer. If the color is inside the tolerance, we turn it black to represent the background.
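
As a rough sketch of that per-pixel comparison (the color 0x7093A6 and tolerance of 16 are just the example numbers from this post; `classify` is a hypothetical helper, not part of Magnetic):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Compare each packed 0xRRGGBB pixel to the average background color.
// A pixel whose channels all sit within the tolerance is background (0);
// anything further off is marked as part of a flyer (255).
std::vector<uint8_t> classify(const std::vector<uint32_t>& pixels,
                              uint32_t avgBackground, int tolerance)
{
    const int br = (avgBackground >> 16) & 0xFF;  // background red
    const int bg = (avgBackground >> 8)  & 0xFF;  // background green
    const int bb =  avgBackground        & 0xFF;  // background blue
    std::vector<uint8_t> mask(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        const int r = (pixels[i] >> 16) & 0xFF;
        const int g = (pixels[i] >> 8)  & 0xFF;
        const int b =  pixels[i]        & 0xFF;
        const bool flyer = std::abs(r - br) > tolerance ||
                           std::abs(g - bg) > tolerance ||
                           std::abs(b - bb) > tolerance;
        mask[i] = flyer ? 255 : 0;  // white = flyer, black = background
    }
    return mask;
}
```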

With the image converted to black and white, we can look for blobs (the shapes the white parts form) as I already do in Magnetic. This will give us size, outline nodes, centroid (accurate center of blob), and peak data (head, hands, feet).

Much like the calibration in Magnetic (which corrects for lens distortion), you could undistort video from wide angle lens, correct for angle, etc, so that the centroids "flatten" out (kind of like making a flat map of Earth).

All you really need is the centroid, or center. By connecting each center to the next closest center in the "flat" image, you can create triangles and/or lines. Use these nodes to analyze the overall shape of the formation and triangulate the grid. Compare the center of each flyer to the connecting points in the grid. If the centroid is within x% of the formation size, it qualifies.
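
With centroids in hand, the nearest-neighbour tolerance check could look something like this (`formationComplete` and the percentage-based tolerance are illustrative names, not anyone's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// For each flyer centroid, find the distance to its nearest neighbour and
// compare it against the expected grid spacing with a percentage tolerance.
// The formation qualifies only if every flyer is within tolerance.
bool formationComplete(const std::vector<Point>& centroids,
                       double gridSpacing, double tolerancePct)
{
    if (centroids.size() < 2) return false;
    for (std::size_t i = 0; i < centroids.size(); ++i) {
        double nearest = 1e30;
        for (std::size_t j = 0; j < centroids.size(); ++j) {
            if (i == j) continue;
            const double dx = centroids[i].x - centroids[j].x;
            const double dy = centroids[i].y - centroids[j].y;
            nearest = std::min(nearest, std::hypot(dx, dy));
        }
        // Out of bounds if the nearest neighbour deviates too much
        // from the expected spacing.
        if (std::abs(nearest - gridSpacing) > gridSpacing * tolerancePct / 100.0)
            return false;
    }
    return true;
}
```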

If all nodes are within this range, we can pause the video or maybe spit out an image. The frame would show the raw video, the centroids, and the grid. Out of bounds flyers would be noted. Since we have the outline data, we could actually highlight their whole body so everyone knows who screwed up. ;)



I know that's a long post, but this is what's on my mind. Based on experience with similar projects, I know for sure this process would actually work.

This sounds very interesting indeed. Could it accommodate non-standard formations, such as asymmetrical and/or 3D formations with people on different levels?
"It's just skydiving..additional drama is not required"
Some people dream about flying, I live my dream
SKYMONKEY PUBLISHING

This sounds like a huge step in the direction needed.
Last summer, a few of us talked about the software tracking pixels vs. separating backgrounds, but the pixels would need to be identified by hand. If you can run a differencing algorithm in real time, that's terrific.
One aspect we thought would be helpful is a series of indicators on the screen: each time the software detected a "complete/satisfactory" formation, the indicator would flash green and drop a marker for later reference.

Does Magnetic offer motion predicting or is it currently a static modeling software that models flux fields?

When you do the bit shift, are you comparing/inverting regional pixel values or individual pixels? Given the amount of blur, motion, and highly compressed codecs, it would require a huge amount of horsepower to process if it's individual pixels or small reference blocks.

Sounds like a really powerful step forward, if this can be made to happen!


Sounds really good. You're touching on a lot of points people have been thinking about how to solve, so yeah, we're all ears! :)
Do you know when you're going to release this software library? I'd sure like to play with it.

Cheers,

Costyn


Quote

This sounds very interesting indeed. Could it accommodate non-standard formations, such as asymmetrical and/or 3D formations with people on different levels?

Good question. Asymmetrical, yes: potential success/failure would be based on the angles of the lines/triangles between flyers -- not the overall shape. Different levels... hmmm... depends.

Quote

Each time the software detected a "complete/satisfactory" formation, the indicator would flash green and drop a marker for later reference.

Right... I was thinking there would be a real-time indicator that a formation was complete, but a BMP would be bounced out for verification.

Quote

Does Magnetic offer motion predicting or is it currently a static modeling software that models flux fields?

I'm not sure how that would apply here, but yes. Each "blob" contains its current and previous position. You can use this difference to get angle and velocity. Magnetic itself wouldn't do the prediction, but the application built atop it could.
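
A sketch of what an application could compute from that difference (the `Blob` struct here is a stand-in, not Magnetic's actual type):

```cpp
#include <cmath>

// Stand-in for a tracked blob carrying its current and previous centroid.
struct Blob { double x, y, prevX, prevY; };

// Speed in pixels per second over one frame interval dt (seconds).
double speed(const Blob& b, double dt) {
    return std::hypot(b.x - b.prevX, b.y - b.prevY) / dt;
}

// Direction of travel in radians, measured from the +x axis.
double headingRadians(const Blob& b) {
    return std::atan2(b.y - b.prevY, b.x - b.prevX);
}
```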

Quote

When you do the bitshift, are you comparing/inverting regional pixel values or individual pixels? Given the amount of blur, motion, and highly compressed codecs, it would require a huge amount of horsepower to process this if it's individual or small reference blocks.


I see what you're saying, but there's no need to get that complicated. By applying a slight blur, dilating the pixels, and getting the right (color) threshold setting, you get essentially the same result. It doesn't take much processor at all; you'd be surprised. The app always outpaces the video. For my purposes it has to, because I generally write this for interactive pieces.

But I'm thinking we could do something even simpler than all this. A flock filmed from below against the bright sky is very, very easy to separate from the background by converting the image to greyscale and setting a threshold. Magnetic already does this.

Quote

Do you know when you're going to release this software library? I'd sure like to play with it.



Unfortunately, I'm swamped with the day job. However, I'd love to see this happen and want to help however I can. As I mentioned, if we set it up so that the video used for judging is always shot from below, looking up against the sky, then you can use Magnetic as is (well, with slight tweaking to use a video file instead of live video). But if it needs to work with color (i.e., video from above looking down), it will be more complicated and very time-consuming.

I think the first step is to get some video to work with. If anyone has some good flock formation video shot from below, please PM me and send it my way. I'll use the video to get a project started that can detect flyers. Once I've got the CV side complete, I can hand it off to anyone who wants to write the triangle/line logic explained previously.

I'd make myself available to answer questions, but I don't foresee being able to get in and do the full development/testing. Just too slammed right now.

Any takers?

Quote

Any takers?



Nice ;) I'm on vacation for a few days and we have something awesome popping up ;) We've indeed been talking about doing this for a while, but I haven't found the time yet to look into it.

I did several EyeToy games for PS2 a couple of years ago, so I'm familiar with some of the motion detection stuff, but I haven't done anything with it for a while... if you have code or papers to share, I'd be happy to take a stab at it... PM me if you do...

As you said though... a test video shot from above or below would be needed ;)

Another option I've been considering is stereo imaging. Basically, if I have two pictures and know the distance between the two lenses, some depth calculations can be done; considering the depth between the people and the ground is pretty large, it should be reasonably doable to get the outlines of the wingsuits. It would require a helmet with two cameras that are entirely in sync, though.
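
For reference, the depth calculation in question is the standard stereo relation depth = focal length × baseline / disparity. The function name and the numbers below are illustrative only, not from any real rig:

```cpp
#include <cmath>

// Classic two-camera stereo: an object's depth follows from how far its
// image shifts (the disparity) between the left and right views.
// depth = focalLengthPx * baselineMeters / disparityPx
double depthFromDisparity(double focalLengthPx, double baselineM,
                          double disparityPx)
{
    if (disparityPx <= 0.0) return INFINITY;  // no measurable shift
    return focalLengthPx * baselineM / disparityPx;
}
```

With a 1000 px focal length and a 0.5 m helmet baseline, a jumper 100 m away shifts only about 5 px between the two views, which is why the cameras need to be perfectly in sync and why a wider baseline helps at formation distances.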

Quote

Another option I've been considering is stereo imaging. Basically, if I have two pictures and know the distance between the two lenses, some depth calculations can be done; considering the depth between the people and the ground is pretty large, it should be reasonably doable to get the outlines of the wingsuits. It would require a helmet with two cameras that are entirely in sync, though.



Easily, easily done these days; probably more likely than always having someone able to backfly beneath a formation.
With the new cams and tools aimed at stereographics, it's quite cost-effective too.

Quote


Easily, easily done these days; probably more likely than always having someone able to backfly beneath a formation.
With the new cams and tools aimed at stereographics, it's quite cost-effective too.



so ;) get me some pictures then ;) hehe...

Are 180-degree optics any less preferable than 120-degree optics?
How do you want me to force the point of parallax?
Doing this in Hawaii will be easy as pie.
I have a setup now that mounts two cameras on the front of my FTP.

Quote

Are 180-degree optics any less preferable than 120-degree optics?
How do you want me to force the point of parallax?
Doing this in Hawaii will be easy as pie.
I have a setup now that mounts two cameras on the front of my FTP.



Whatever works for you... I think having a high-resolution, sharp/clear stereo picture is more important than the field of view. One thing I did think about, however, is that stereography really requires 'features' in the image, so a shot from below would probably not work very well, considering the background is all blue everywhere.

That said, if the background is all blue, I don't really need anything special to figure out the wingsuits, as long as they are not the same blue ;)

I think in the end we'll end up with multiple algorithms all 'voting' on where each algorithm thinks a wingsuit is.

My experience with motion detection, etc., is that the quality of the results varies a lot under different lighting conditions, and my opinion is that for this to work we need something very reliable... having it miss even one jumper is basically not an option.
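
The 'voting' idea could be sketched like this (`Detection` and `accepted` are hypothetical names; each algorithm contributes at most one vote per candidate position):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Detection { double x, y; };

// Count how many detectors reported a wingsuit within `radius` of a
// candidate position. The candidate is accepted only when at least
// `minVotes` independent algorithms agree.
bool accepted(const Detection& candidate,
              const std::vector<std::vector<Detection>>& perAlgorithm,
              double radius, std::size_t minVotes)
{
    std::size_t votes = 0;
    for (const auto& detections : perAlgorithm) {
        for (const auto& d : detections) {
            if (std::hypot(d.x - candidate.x, d.y - candidate.y) <= radius) {
                ++votes;  // this algorithm saw something close enough
                break;    // one vote per algorithm
            }
        }
    }
    return votes >= minVotes;
}
```

Requiring agreement between, say, a color-threshold detector and a motion detector is one way to get the per-jumper reliability described above.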

Quote

My experience with motion detection, etc., is that the quality of the results varies a lot under different lighting conditions, and my opinion is that for this to work we need something very reliable... having it miss even one jumper is basically not an option.

I concur. The more I think about it, the more I realize that the only practical way to do it is jumpers against the sky.

I've done depth with "stereo" imaging as well, both with two cameras and the "slide" technique with one camera. It's far from reliable/accurate and is not really intended to work at the distances we're talking about here (it's generally accurate to a few feet out). You can do some cool illusions with it, but it's not something I would trust to judge 3D formations.

You would need the cameras to be several feet apart (further than the wingspan of any one jumper) and almost perfectly parallel to the formation to get anywhere near an accurate depth calculation.



Real easy... film against the sky, convert the image to greyscale, and let the jumpers' silhouettes form the blobs. Dead simple, really. No need to over-complicate things with velocity tracking, stereo imaging, etc.; none of that really applies to calculating angles between jumpers in any given frame.


So let's start at step 1 -- get some footage and modify Magnetic to pick jumpers out of the sky. Then we can go from there.

Share this post


Link to post
Share on other sites
Quote

So let's start at step 1 -- get some footage and modify Magnetic to pick jumpers out of the sky. Then we can go from there.



How about 2 (or more) separate cameramen with recording GPSs to create a synthetic aperture?
...

The only sure way to survive a canopy collision is not to have one.
