All Posts in “experiments”

Go With The Flow

Queens of the Stone Age is one of my favourite bands. I go watch them live whenever I get the chance. Last time I saw them, they had a big screen on the stage with some cool visuals for each song. I recorded this video with my phone during ‘Go With The Flow’:

A bunch of bidents travelling in space, flocking, going around bends and coming towards the camera.

Wikipedia: A bident is a two-pronged implement resembling a pitchfork.

A couple of weeks ago I was going through my files, watched this video again and wondered if I could replicate it in WebGL.

Path
To me it seemed like the bidents were following a path in the video, so I started by revisiting an old experiment with steering behaviors and adapted it to 3D. The path itself was generated using a formula extracted from TorusKnotGeometry – in this case a simple curve with p = 2 and q = 4.
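If you are curious, here is a rough sketch of that path. The radius and the number of samples are made up, and the formula is just my reading of the TorusKnotGeometry source:

```js
// Roughly the parametric curve used inside Three.js's TorusKnotGeometry,
// here with p = 2 and q = 4. Radius and sample count are arbitrary.
function torusKnotPoint(u, p = 2, q = 4, radius = 100) {
  const cu = Math.cos(u);
  const su = Math.sin(u);
  const quOverP = (q / p) * u;
  const cs = Math.cos(quOverP);
  return new THREE.Vector3(
    radius * (2 + cs) * 0.5 * cu,
    radius * (2 + cs) * 0.5 * su,
    radius * Math.sin(quOverP) * 0.5
  );
}

// Sample the curve once; the steering agents then follow these points.
const pathPoints = [];
for (let i = 0; i <= 200; i++) {
  const u = (i / 200) * 2 * Math.PI * 2; // u runs from 0 to p * 2π, with p = 2
  pathPoints.push(torusKnotPoint(u));
}
```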

Model
At first I created the model in code. One cylinder for the stick, then two curves, two cylinders and two cones for the top. It looked ok, but the performance was terrible on mobile. I moved to Blender and recreated it there – thanks to all the amazing people that post tutorials and videos online.

Then I adapted the code to use InstancedBufferGeometry. To my surprise, the performance on mobile was even worse. I found out that vertices are duplicated when using the .fromGeometry() method. I joined the discussion on this GitHub issue and proposed a solution, but I don’t think it works for all cases. It worked on mine. Simple modifications to include indices in the generated geometry made it much faster on mobile. I started getting 60 fps on my 5-year-old phone. So in case you found this post while looking for issues with .fromGeometry(), have a look at these modifications here, or check this hack that copies indices from each face of the original geometry.
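For reference, a minimal sketch of the indexed instancing idea – names and counts are placeholders, and older Three.js builds use addAttribute instead of setAttribute:

```js
// Reuse the index of an already-indexed BufferGeometry so the instanced
// copy does not duplicate vertices. `indexedGeometry` and `count` are placeholders.
const instanced = new THREE.InstancedBufferGeometry();
instanced.setIndex(indexedGeometry.getIndex());
instanced.setAttribute('position', indexedGeometry.getAttribute('position'));
instanced.setAttribute('normal', indexedGeometry.getAttribute('normal'));

// One offset per instance, read in a custom vertex shader to place each bident.
const offsets = new Float32Array(count * 3);
instanced.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
```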

Camera
I’ve been wanting to play with spite’s Storyline.js since I bumped into it online. This was the perfect opportunity. It is simple and works very well. I chose a few camera positions around the path and linked them in the storyboard using the t of the curve. With that I could be sure the bidents were always passing by wherever the camera was.
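Storyline.js takes care of the keyframing, but the gist of tying the camera to the curve is roughly this – a stand-in sketch, not Storyline.js code, and cameraOffset is a made-up lead distance:

```js
// Key the camera to the same curve parameter t the flock uses, so the
// bidents are always nearby. The 1.4 factor just pushes the viewpoint
// away from the path for a wider shot.
function updateCamera(camera, curve, t, cameraOffset = 0.05) {
  const target = curve.getPointAt(t % 1); // where the bidents are
  const eye = curve.getPointAt((t + cameraOffset) % 1).multiplyScalar(1.4);
  camera.position.copy(eye);
  camera.lookAt(target);
}
```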

Background
In the original video I think the background was just a pure red color. It looks great, but I thought it was looking too flat in the browser. The inspiration then came from the music video for ‘Go With The Flow’ – IMHO one of the best music videos ever made, hats off to Shynola.

The result is a mishmash of noise shaders found online, especially Procedural SkyBox by Passion.

Check the result here: http://codedoodl.es/_/brunoimbrizi/go-with-the-flow/


Billie Deer

Three years ago my friend David invited me to participate in his Christmas Experiments project – an advent calendar with one code experiment a day. He gave me about two months’ notice so I had plenty of time, but I spent most of it just trying to have an idea. There were a bunch of false starts until one happy day when I googled ‘christmas gifs’ and found this:

The Way Christmas Makes Me Feel, by Elliot Dear

Half reindeer, half Michael Jackson. Can’t go wrong with that. My idea was to trace the silhouette of the character and then use that as a base to create visualizations in HTML Canvas. And that’s what I did. And people liked it. It was fun!


http://christmasexperiments.com/2013/03/make-me-feel/

This year David invited me to join the Christmas Experiments again, but this time I had only 3 weeks. I knew I had to start straight away. The idea had to come fast. And it did. How about a tribute to my old experiment, but this time in 3D? Wow, such brilliant, very technology, much moves.

Next, a quick feasibility check. I downloaded Blender, watched a few tutorials on modeling and rigging, found a couple of generic male models, a couple of reindeer heads and, most importantly, I found this mocap:


TurboSquid: Dance MJ Loop 03 – Pelvic Thrust

OK so all I had to do was to throw all those ingredients in a pan and start cooking. I thought I would have a dancing model in a Three.js scene in a couple of days and then I could go crazy on the shaders to make some cool visualizations.

I was wrong.

It started well. I learned the basics of modelling in Blender and was able to chop this guy’s head off and replace it with this deer head. He was looking cool. I called him mandeer.

I learned how to rig (following mainly these videos) and started testing some free mocap using the Makewalk plugin for Blender.

Around this time I showed the prototype to Damien at work and he got interested. We discussed a few ideas for the sound and he was keen to work on it. From that point onward we were a duo. We wrote to David and told him it was going to be a collaboration.

That’s when the problems started. No, not with Damien, he was great. With the mocap. I purchased the file from TurboSquid and tried to convert it from .BIP to .BVH so I could use it in Blender. It didn’t work. It really didn’t work.

The flow was to load the .BIP onto a biped in 3Ds Max, then export the animation as .FBX, open it in Motion Builder, clean the object tree, export it as .BVH, open the rigged model in Blender and load .BVH onto it. But somewhere in this broken telephone the information was not translated properly and all I could get on the other side was a cubist deformed pile of bones. I tried everything. I tried random combinations of export settings, I tried BVHacker, I googled every term imaginable, I read forums with desperate lonely comments posted in 2008, I waited for the planets to align, I called my mom…

At the same time Damien and I were clocking insane hours at the office – funnily enough, on another advent calendar project – and there was very little time for anything else. The deadline for our experiment was approaching and we were not ready. We tried to give it a last push on the last day (December 3rd), but it didn’t happen. We missed the deadline. David was sad. We were sad. We ended up going live with a bloody ‘coming soon’ placeholder.

I never really managed to solve the .BIP to .BVH problem. In the end what worked for me was to create a pose for the biped in 3Ds Max, then export it as .DAE and import it directly in Blender (skipping Motion Builder), then rig the character again based on the new pose and adjust the twisted bones one by one, frame by frame. It was laborious, but at least it was getting somewhere.

We ended up going live 16 days later, on the 19th of December. Still crazy busy at work, but trying to progress with the experiment in every spare hour. No more time or energy to go crazy with shaders, unfortunately. I wanted to recreate some visualisations from my 2013 version, like the popping circles and the disco lines – I think they would look good in 3D – but I’ll have to leave them for next time. What I ended up using was a combination of point lights with lambert shading and a directional light with a hatching shader.
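Roughly what that setup looks like in code – colours, intensities and positions here are made up, and the hatching pass itself is not shown:

```js
// Point lights drive the lambert shading; the directional light is the one
// feeding the hatching shader in a separate material/pass.
const material = new THREE.MeshLambertMaterial({ color: 0xffffff });

const pointA = new THREE.PointLight(0xff4444, 1, 50);
pointA.position.set(10, 10, 10);
scene.add(pointA);

const pointB = new THREE.PointLight(0x4444ff, 1, 50);
pointB.position.set(-10, 5, -10);
scene.add(pointB);

const directional = new THREE.DirectionalLight(0xffffff, 0.6);
directional.position.set(0, 20, 10);
scene.add(directional);
```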


Christmas Experiments 2016: Billie Deer

Thanks:
To Damien for the partnership, to David and William for being patient with us, to Michael Jackson for the beat, to Elliot Dear for the gif, to Mr.doob and all the amazing people making Three.js, to Ben Houston for tidying up the animation classes, to the ones behind the Blender exporter, to the people that take the time to upload tutorial videos to YouTube, to the guy from TurboSquid that took 72 hours to reply saying that their conversion support doesn’t cover animations, to the people that created a GUI to edit mocap just for Second Life (BVHacker), to the lonely guy that posted a question in 2008 and is still waiting for an answer and to Konami for the code.

Check the experiment here: http://christmasexperiments.com/2016/03/billie-deer

Circular Images

circular images

It was a sunny day in London. I was sitting under a tree and had this idea: what if I could scan an image and draw the bright areas with circular lines? Bullshit. I stole it. This guy did it first.

I absolutely loved the visuals and had to do it myself. He tagged it as #processing but I didn’t find a sketch or a video so I could only guess how it worked. Once I got the basics working I had a few ideas for variations and threw some sliders in. It’s not completely new, but I like to think my small additions are valid.
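For anyone curious, here is my best guess at the basics. I never saw the original sketch, so this is just how I approached it in canvas, with arbitrary numbers:

```js
// Sample the image's brightness along concentric circles and let bright
// pixels push the line outwards. `ctx` and `imageData` come from a canvas
// that already has the photo drawn on it.
function drawCircles(ctx, imageData, cx, cy, rings = 60) {
  const { width, height, data } = imageData;
  for (let r = 1; r <= rings; r++) {
    const radius = (r / rings) * Math.min(width, height) * 0.5;
    ctx.beginPath();
    for (let a = 0; a <= Math.PI * 2; a += 0.01) {
      const px = Math.round(cx + Math.cos(a) * radius);
      const py = Math.round(cy + Math.sin(a) * radius);
      let offset = 0;
      if (px >= 0 && px < width && py >= 0 && py < height) {
        const i = (py * width + px) * 4;
        const brightness = (data[i] + data[i + 1] + data[i + 2]) / (3 * 255);
        offset = brightness * 6; // brighter pixels displace the line more
      }
      const x = cx + Math.cos(a) * (radius + offset);
      const y = cy + Math.sin(a) * (radius + offset);
      a === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
    }
    ctx.stroke();
  }
}
```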

There is more to explore. Maybe make the circles pulse with sound. Maybe add 3D. Maybe write a shader and run it over a video. These could all be really cool. If someone wants to try please go ahead and let me know what you did. I might try them too. But for now I just want to release this as is. A quick experiment.

I imagine this could be a nice artwork for an album cover. If you agree and happen to know just the band/artist, get in touch and I’ll be happy to work on a print-quality version.

Thank you @williamapan for the amazing ‘Shout’ photo.

Check the experiment here.

Teen Spirit

teen-spirit-01

Even before I finished my previous experiment I already knew the next one was going to be about sound. I wanted to do music visualization in the browser.

And one more time the result of the experiment is quite different from the initial idea. But I quite like that – it is one of the nice things about experimentation: the direction can change any time something interesting appears in the process.

I started by playing around with the Web Audio API and looking for references. I found this cool project called Decorated Playlists, a website ‘dedicated to the close relationship between music & design.’ One of the playlists – Run For Cover – had some nice visuals with bold lines coming towards the viewer with a strong perspective. I imagined how those lines could react to music and replicated them in code. In these initial tests I was using a song from that playlist called Espiritu Adolescente, by Mandrágora Tango Orchestra – which is a cool tango version of Nirvana’s Smells Like Teen Spirit. It was looking good, but nothing special. I tried a few variations here and there and eventually dropped the idea.
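The Web Audio part is fairly standard – something along these lines, where the audio element id is made up:

```js
// An AnalyserNode gives per-frame frequency data that can drive the lines.
const audio = document.querySelector('#track');
const context = new AudioContext();
const source = context.createMediaElementSource(audio);
const analyser = context.createAnalyser();
analyser.fftSize = 256;

source.connect(analyser);
analyser.connect(context.destination);

const frequencies = new Uint8Array(analyser.frequencyBinCount);

function update() {
  analyser.getByteFrequencyData(frequencies); // values 0–255, one per frequency bin
  // ...map the values onto the lines / bars here...
  requestAnimationFrame(update);
}
update();
```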

teen-spirit-03

Time for a new experiment. And a new song. I am a big fan of rock, so I started looking for the next tune in my own music library. I chose God Hates A Coward, by Tomahawk. I love this song. There is an awesome live version on YouTube where we can see Mike Patton barking the lyrics behind a mask.

That mask could be interesting to use in a visualization, so I started googling images of masks. This one grabbed my attention. It seems to be a drawing based on this photo, but instead of the text on the cylinder, there are just lines. Once again I imagined how those lines could look when reacting to music.

teen-spirit-04

So I went to code to try to replicate that cylinder. I tried a few geometries in three.js, but I realized I needed more control over the vertices. It was one of those moments when my brain just wouldn’t shut down. I remember figuring out how to do it on the street walking back from lunch. The solution was to divide the bars into segments and then stretch the vertices only up to the limit of each segment. For example, if a bar has 10 segments and the value it needs to represent is 0.96, the entire bar is not scaled down to 0.96; instead, the first 9 segments keep their full value of 0.1 each and only the last segment is scaled down to 0.06. Those segments can then be distributed around a circle and the shape is preserved for any value.
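In code the idea is tiny. A sketch with made-up names, not the exact implementation:

```js
// Split a bar into `segments` pieces, clamping each piece to its own share
// of the value. A value of 0.96 with 10 segments gives nine full pieces
// of 0.1 and one last piece of 0.06.
function segmentHeights(value, segments = 10) {
  const step = 1 / segments;
  const heights = [];
  for (let i = 0; i < segments; i++) {
    const remaining = value - i * step;
    heights.push(Math.max(0, Math.min(step, remaining)));
  }
  return heights;
}

segmentHeights(0.96, 10); // [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.06]
```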

The more I saw those bars reacting to music in 3D, the further away I got from the idea of the mask. The circular bars had something of their own and I couldn’t stop playing with them. Eventually I dropped the idea of the mask, and also the song I was using. In my tests I found that the bars were reacting much better to the Nirvana tango I was using earlier. At this point the two experiments merged.

I feel stupid
And contagious
Here we are now
Entertain us

I didn’t know exactly what to do with all those shapes. All I knew was that some of them were looking pretty cool at certain camera angles and light positions, so I started to create some scenes with my favourite settings. I have to say it was a constant battle in my head between using pre-defined scenes and making everything dynamic. Some people might just start clicking and close the experiment because it doesn’t react. But I wanted to make something tailored for that song. Something like Robert Hodgin’s Solar. Everything is generated by code and runs in real time in the browser, but it could also be a video.

teen-spirit-02

The creation of these scenes is what took most of the time. There was a lot of experimentation and a lot of stuff didn’t make it to the final version. Together with the sound-reactive bars, I can say there were two other major accomplishments: one was to finally get my head around quaternions to be able to tween the camera smoothly – I should write another post about that, but instead I wrote about it on Stack Overflow – and the other was to add the words ‘hello’ and ‘how low’ in a way that would fit well with the visuals.
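The short version of the quaternion trick, as a sketch rather than the exact code – startPosition, nextPosition and target are placeholders:

```js
// Interpolate orientation with slerp instead of tweening Euler angles,
// which avoids flips and keeps the camera motion smooth.
const fromQuat = camera.quaternion.clone();
const toQuat = new THREE.Quaternion().setFromRotationMatrix(
  new THREE.Matrix4().lookAt(nextPosition, target, camera.up)
);

function tweenCamera(t) { // t goes from 0 to 1
  camera.quaternion.copy(fromQuat).slerp(toQuat, t);
  camera.position.lerpVectors(startPosition, nextPosition, t);
}
```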

I plan to explore these two topics a bit more in the future. And I definitely want to do more music visualization. Hopefully next time with some rock n’ roll!

Mind The Gap

mind-the-gap-02

After I finished my previous experiment with the Web Audio API I was looking for something else to do with sound. I had this idea of using London Underground’s data and playing a note every time a train left a station. I could assign a different note for each line and the live feed would create random music all day long. So I started checking TfL Developers’ Area and it didn’t take long to realize that my idea wouldn’t be possible. The data does show predicted arrival times for each station, but these are rounded to 30 seconds. If the experiment were to use the data literally, it would stay silent for 30s, then play a bunch of notes at the same time, then go back to silence for another 30s. A friend even suggested randomizing some values in between those 30s, but that wouldn’t be any different from just listening to some random notes chosen by the computer, without any connection to the tube.

OK, that idea was gone, but the data was quite interesting. With the rounded times I could tween the position of the trains between stations. It would be cool to see the trains moving in ‘almost’ real time on the screen, wouldn’t it? Oh wait, someone did it already: Live map of London Underground, by Matthew Somerville. And it is nice, but not really what I had in mind. I wanted more of a cool visualization based on the tube data, rather than an informative/useful map. How could I do something new with this data? Add a third dimension maybe? Three.js had been on my list of things to experiment with for a long time and this seemed like the right opportunity. Oh wait, has someone done it already? The only thing I could find was this and it is definitely not what I had in mind. So yeah, green light!

mind-the-gap-01

To do a 3D map of the tube I would need x, y, z values. It is amazing what we can find on the internet – it’s all there: Depth of tube lines and London Underground Station Locations.

I had everything I needed: train times, latitude, longitude and depth of the stations. Those were coming from many different files, so I stretched my regex skills and created a simple tool with Adobe AIR to parse everything and output a consolidated .json for me. With that I could finally plot some points in space using the Mercator projection. The next step was to create tubes connecting these points and again I was really lucky to find exactly what I needed online. Three.js is an amazing library not only because of what it does, but also because of how it is made. Together with the classes I needed (TubeGeometry and SplineCurve3), I also found the conversation between the authors while they were developing these classes.
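For the curious, plotting a station roughly comes down to this – a sketch with arbitrary scale factors (SplineCurve3 comes from the Three.js builds of the time; newer ones use CatmullRomCurve3):

```js
// Project latitude/longitude with the Mercator formula and use depth as y.
function stationToVector3(lat, lon, depthMetres, scale = 1000, depthScale = 0.01) {
  const x = scale * (lon * Math.PI / 180);
  const z = scale * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
  const y = -depthMetres * depthScale; // below ground level
  return new THREE.Vector3(x, y, z);
}

// The stations of a line then feed a spline and a tube around it.
const curve = new THREE.SplineCurve3(stationPoints);
const tubeGeometry = new THREE.TubeGeometry(curve, 200, 0.5, 8, false);
```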

One of the biggest challenges was to make the trains follow the spline and sometimes change splines depending on the train’s destination. I feel that my algorithm could be more solid here, but it is working well. The last touches were to add labels for each station and some ambient sound recorded on the tube.
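The movement itself is a simple interpolation between two stations – something like this sketch, where all the fields on train are made up:

```js
// Interpolate the curve parameter between the previous and the next station
// using the predicted times from the TfL feed.
function updateTrain(train, curve, now) {
  const raw = (now - train.departure) / (train.arrival - train.departure);
  const progress = Math.max(0, Math.min(1, raw));            // clamp to [0, 1]
  const t = train.fromT + (train.toT - train.fromT) * progress;
  train.mesh.position.copy(curve.getPointAt(t));             // fromT/toT are the stations' curve parameters
}
```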

That’s it. I hope people find it interesting and play with it.

Launch the experiment.