Go With The Flow

Queens of the Stone Age is one of my favourite bands. I go watch them live whenever I get the chance. Last time I saw them, they had a big screen on the stage with some cool visuals for each song. I recorded this video with my phone during ‘Go With The Flow’:

A bunch of bidents travelling in space, flocking, going around bends and coming towards the camera.

Wikipedia: A bident is a two-pronged implement resembling a pitchfork.

A couple of weeks ago I was going through my files, watched this video again and wondered if I could replicate it in WebGL.

Path
To me it seemed like the bidents were following a path in the video, so I started by revisiting an old experiment with steering behaviors and adapted it to 3D. The path itself was generated using a formula extracted from TorusKnotGeometry. In this case a simple curve using p = 2 and q = 4.
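For reference, the sampling function looks roughly like this (adapted from the formula inside Three.js' TorusKnotGeometry; the function and variable names here are mine, and u runs from 0 to p * 2π):

function torusKnotPoint(u, p, q, radius) {
  const quOverP = (q / p) * u;
  const cs = Math.cos(quOverP);
  return new THREE.Vector3(
    radius * (2 + cs) * 0.5 * Math.cos(u),
    radius * (2 + cs) * 0.5 * Math.sin(u),
    radius * Math.sin(quOverP) * 0.5
  );
}

// sample the path the bidents follow, with p = 2 and q = 4
const path = [];
for (let i = 0; i <= 200; i++) {
  const u = (i / 200) * Math.PI * 2 * 2; // 0 → p * 2π, with p = 2
  path.push(torusKnotPoint(u, 2, 4, 10));
}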

Model
At first I created the model in code. One cylinder for the stick, then two curves, two cylinders and two cones for the top. It looked ok, but the performance was terrible on mobile. I moved to Blender and recreated it there – thanks to all the amazing people that post tutorials and videos online.

Then I adapted the code to use InstancedBufferGeometry. To my surprise, the performance on mobile was even worse. It turned out that vertices are duplicated when using the .fromGeometry() method. I joined the discussion on this GitHub issue and proposed a solution – I don't think it works for all cases, but it worked on mine. Simple modifications to include indices in the generated geometry made it much faster on mobile: I started getting 60 fps on my five-year-old phone. So in case you found this post while looking for issues with .fromGeometry(), have a look at these modifications here, or check this hack that copies indices from each face of the original geometry.
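For anyone landing here with the same problem, the gist of the fix is simply to keep the index instead of expanding every face. A rough sketch with a stand-in primitive (not the actual bident geometry, and using the current BufferGeometry API rather than the one from back then):

// any indexed BufferGeometry will do; a cylinder stands in for the bident model
const base = new THREE.CylinderGeometry(0.05, 0.05, 1, 8);

// copy the index and attributes so shared vertices are not duplicated
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = base.index;
geometry.attributes.position = base.attributes.position;
geometry.attributes.normal = base.attributes.normal;
geometry.attributes.uv = base.attributes.uv;

// one offset per instance, read by the vertex shader to place each bident
const count = 500;
const offsets = new Float32Array(count * 3);
geometry.setAttribute('aOffset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.instanceCount = count;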

Camera
I’ve been wanting to play with spite’s Storyline.js since I bumped into it online. This was the perfect opportunity. It is simple and works very well. I chose a few camera positions around the path and linked them in the storyline using the t of the curve. With that I could be sure the bidents were always passing by wherever the camera was.
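Stripped of the Storyline.js details (this is not its API, just the underlying idea), it boils down to keyframing camera positions against t and interpolating between them:

const keyframes = [
  { t: 0.00, position: new THREE.Vector3( 0, 2,  8) },
  { t: 0.35, position: new THREE.Vector3( 6, 1,  0) },
  { t: 0.70, position: new THREE.Vector3(-4, 3,  2) },
  { t: 1.00, position: new THREE.Vector3( 0, 2,  8) },
];

function updateCamera(camera, curve, t) {
  // find the pair of keyframes surrounding t
  let i = 0;
  while (i < keyframes.length - 2 && keyframes[i + 1].t < t) i++;
  const a = keyframes[i];
  const b = keyframes[i + 1];
  const k = (t - a.t) / (b.t - a.t);
  camera.position.lerpVectors(a.position, b.position, k);
  camera.lookAt(curve.getPointAt(t)); // keep looking at where the bidents are
}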

Background
In the original video I think the background was just a pure red color. It looks great, but I thought it was looking too flat in the browser. The inspiration then came from the music video for ‘Go With The Flow’ – IMHO one of the best music videos ever made, hats off to Shynola.

The result is a mishmash of noise shaders found online, especially Procedural SkyBox by Passion.

Check the result here: http://codedoodl.es/_/brunoimbrizi/go-with-the-flow/

Billie Deer

Three years ago my friend David invited me to participate in his Christmas Experiments project – an advent calendar with one code experiment a day. He gave me about two months’ notice so I had plenty of time, but I spent most of it just trying to have an idea. There were a bunch of false starts until one happy day when I googled ‘christmas gifs’ and found this:

The Way Christmas Makes Me Feel, by Elliot Dear

Half reindeer, half Michael Jackson. Can’t go wrong with that. My idea was to trace the silhouette of the character and then use that as a base to create visualizations in HTML Canvas. And that’s what I did. And people liked it. It was fun!


http://christmasexperiments.com/2013/03/make-me-feel/

This year David invited me to join the Christmas Experiments again, but this time I had only 3 weeks. I knew I had to start straight away. The idea had to come fast. And it did. How about a tribute to my old experiment, but this time in 3D? Wow, such brilliant, very technology, much moves.

Next, a quick feasibility check. I downloaded Blender, watched a few tutorials on modeling and rigging, found a couple of generic male models, a couple of reindeer heads and, most importantly, I found this mocap:


TurboSquid: Dance MJ Loop 03 – Pelvic Thrust

OK so all I had to do was to throw all those ingredients in a pan and start cooking. I thought I would have a dancing model in a Three.js scene in a couple of days and then I could go crazy on the shaders to make some cool visualizations.

I was wrong.

It started well. I learned the basics of modelling in Blender and was able to chop this guy’s head off and replace it with this deer head. He was looking cool. I called him mandeer.

I learned how to rig (following mainly these videos) and started testing some free mocap using the Makewalk plugin for Blender.

Around this time I showed the prototype to Damien at work and he got interested. We discussed a few ideas for the sound and he was keen to work on it. From that point onward we were a duo. We wrote to David and told him it was going to be a collaboration.

That’s when the problems started. No, not with Damien, he was great. With the mocap. I purchased the file from TurboSquid and tried to convert it from .BIP to .BVH so I could use it in Blender. It didn’t work. It really didn’t work.

The flow was to load the .BIP onto a biped in 3Ds Max, then export the animation as .FBX, open it in Motion Builder, clean the object tree, export it as .BVH, open the rigged model in Blender and load .BVH onto it. But somewhere in this broken telephone the information was not translated properly and all I could get on the other side was a cubist deformed pile of bones. I tried everything. I tried random combinations of export settings, I tried BVHacker, I googled every term imaginable, I read forums with desperate lonely comments posted in 2008, I waited for the planets to align, I called my mom…

At the same time Damien and I were clocking insane hours at the office – funnily enough, on another advent calendar project – and there was very little time for anything else. The deadline for our experiment was approaching and we were not ready. We tried to give it a last push on the last day (December 3rd), but it didn’t happen. We missed the deadline. David was sad. We were sad. We ended up going live with a bloody ‘coming soon’ placeholder.

I never really managed to solve the .BIP to .BVH problem. In the end what worked for me was to create a pose for the biped in 3Ds Max, then export it as .DAE and import it directly in Blender (skipping Motion Builder), then rig the character again based on the new pose and adjust the twisted bones one by one, frame by frame. It was laborious, but at least it was getting somewhere.

We ended up going live 16 days later, on the 19th of December. Still crazy busy at work, but trying to progress with the experiment in every spare hour. No more time or energy to go crazy with shaders, unfortunately. I wanted to recreate some visualisations from my 2013 version, like the popping circles and the disco lines (I think they would look good in 3D), but I’ll have to leave them for next time. What I ended up using was a combination of point lights with lambert shading and a directional light with a hatching shader.
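A minimal sketch of that kind of setup in Three.js (colours, intensities and positions here are placeholders, and the hatching pass itself is not shown):

const scene = new THREE.Scene();

// lambert materials pick up the point lights
const material = new THREE.MeshLambertMaterial({ color: 0xb03030 });

const pointA = new THREE.PointLight(0xff5522, 1.2, 30);
pointA.position.set(4, 6, 2);
const pointB = new THREE.PointLight(0x2255ff, 0.8, 30);
pointB.position.set(-5, 2, -3);

// the directional light feeds the hatching shader (a separate material, not shown)
const key = new THREE.DirectionalLight(0xffffff, 0.6);
key.position.set(2, 10, 5);

scene.add(pointA, pointB, key);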


Christmas Experiments 2016: Billie Deer

Thanks:
To Damien for the partnership, to David and William for being patient with us, to Michael Jackson for the beat, to Elliot Dear for the gif, to Mr.doob and all the amazing people making Three.js, to Ben Houston for tidying up the animation classes, to the ones behind the Blender exporter, to the people that take the time to upload tutorial videos to YouTube, to the guy from TurboSquid that took 72 hours to reply saying that their conversion support doesn’t cover animations, to the people that created a GUI to edit mocap just for Second Life (BVHacker), to the lonely guy that posted a question in 2008 and is still waiting for an answer and to Konami for the code.

Check the experiment here: http://christmasexperiments.com/2016/03/billie-deer

How to Draw an Offset Curve

I was prototyping something and I needed to draw a curve with some thickness. It wasn’t just a case of increasing the thickness of the stroke: I wanted to find the contour of a curve, to draw two new curves around one in the center. After some research, I learnt that the correct term for that is parallel curve or offset curve.

The task turned out to be not as simple as I thought. After some failed attempts I found the solution in a paper by Gabriel Suchowolski entitled ‘Quadratic bezier offsetting with selective subdivision’. The recipe is there, but I was missing an open source implementation, so I decided to write one.
In this post I present a step-by-step process and, at the end, an interactive version written in JavaScript.

How to draw an offset curve:

Start with 3 points.
wide-01

Draw a quadratic curve using p1 and p2 as anchors and c as the control point.
wide-02
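In canvas terms (assuming p1, c and p2 are plain {x, y} objects and ctx is a 2D context):

ctx.beginPath();
ctx.moveTo(p1.x, p1.y);
ctx.quadraticCurveTo(c.x, c.y, p2.x, p2.y);
ctx.stroke();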

Get the vectors between these points.
v1 = c - p1
v2 = p2 - c

Find the vector perpendicular to v1 and scale it to the width (or thickness) of the new curve.
Add the new temporary vector to p1 to find p1a, then subtract it from p1 to find p1b.
Do the same with c to find c1a and c1b.
wide-03
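In code, these steps could look something like this (plain {x, y} objects again; thickness is the offset distance):

const v1 = { x: c.x - p1.x, y: c.y - p1.y };
const v2 = { x: p2.x - c.x, y: p2.y - c.y };

// perpendicular to v1, scaled to the desired thickness
const len1 = Math.hypot(v1.x, v1.y);
const n1 = { x: (-v1.y / len1) * thickness, y: (v1.x / len1) * thickness };

const p1a = { x: p1.x + n1.x, y: p1.y + n1.y };
const p1b = { x: p1.x - n1.x, y: p1.y - n1.y };
const c1a = { x: c.x + n1.x, y: c.y + n1.y };
const c1b = { x: c.x - n1.x, y: c.y - n1.y };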

Repeat the same process with v2 to find the points on the other side.
wide-04

Find vectors between the new points. These are parallel to v1 and v2 and offset by the given thickness.
wide-05

The intersection points of these vectors are the new control points ca and cb.
wide-06
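Finding them is a standard line-line intersection. A small sketch (p2a and p2b come from repeating the earlier step with v2):

// intersection of two lines, each given by a point and a direction vector
function intersectLines(pA, dA, pB, dB) {
  const det = dA.x * dB.y - dA.y * dB.x;
  if (Math.abs(det) < 1e-9) return null; // parallel, no single intersection
  const t = ((pB.x - pA.x) * dB.y - (pB.y - pA.y) * dB.x) / det;
  return { x: pA.x + dA.x * t, y: pA.y + dA.y * t };
}

const ca = intersectLines(p1a, v1, p2a, v2); // control point for one side
const cb = intersectLines(p1b, v1, p2b, v2); // control point for the other side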

Draw a curve from p1a to p2a with control point at ca.
Draw another curve from p1b to p2b with control point at cb.
wide-07

This method only works when the angle between v1 and v2 is wide (bigger than 90 degrees); it doesn’t work for sharp angles.
wide-08

For angles smaller than 90 degrees it is necessary to split the curve. In fact the curve could be split several times: the more splits, the better the precision of the offset curve. Splitting only at 90 degrees is fast and the result is not too bad.

The curve needs to be split at t, the point on the curve closest to c. The technique to find t is described in the paper I mentioned before. It requires solving a third-degree polynomial of the form ax³ + bx² + cx + d = 0.
third-degree-polynomial

Solving the equation gives a number between 0 and 1 that can be plugged back into the curve to find t.
curve-07
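The paper solves that cubic analytically (hence the thanks for The Cubic Formula below). As a simpler stand-in, the same t can also be found numerically by sampling the curve and keeping the point closest to c:

function quadPoint(p1, c, p2, t) {
  const mt = 1 - t;
  return {
    x: mt * mt * p1.x + 2 * mt * t * c.x + t * t * p2.x,
    y: mt * mt * p1.y + 2 * mt * t * c.y + t * t * p2.y,
  };
}

function closestT(p1, c, p2, steps = 200) {
  let bestT = 0;
  let bestD = Infinity;
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    const pt = quadPoint(p1, c, p2, t);
    const d = (pt.x - c.x) ** 2 + (pt.y - c.y) ** 2;
    if (d < bestD) { bestD = d; bestT = t; }
  }
  return bestT;
}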

Find the tangent at t and the points t1 and t2 where it intersects v1 and v2.
Create a new vector perpendicular to the tangent at t, scale it to the given thickness and find qa and qb. This vector splits the original curve at t.
curve-08

Add the tangent at t to qa and qb and find the points where it intersects the offset vectors.
curve-09

These are all the points needed to draw an offset curve. All the others that were created in the process can be removed for clarity.
Draw a curve with anchors at p1a and qa and the control point at q1a.
curve-10

Repeat the process for all the new points to get the offset curve.
curve-11

Here is an interactive version. Drag the gray dots to change the curve.

See the Pen VYEWgY by Bruno Imbrizi (@brunoimbrizi) on CodePen.

Thanks to:
– Gabriel Suchowolski (aka @microbians) for his paper
– toxiclibs for the really handy Vec2D and Line2D classes
– Professor Eric Schechter for The Cubic Formula

Circular Images

circular images

It was a sunny day in London. I was sitting under a tree and had this idea: what if I could scan an image and draw the bright areas with circular lines? Bullshit. I stole it. This guy did it first.

I absolutely loved the visuals and had to do it myself. He tagged it as #processing but I didn’t find a sketch or a video so I could only guess how it worked. Once I got the basics working I had a few ideas for variations and threw some sliders in. It’s not completely new, but I like to think my small additions are valid.
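The basic loop can be sketched roughly like this (not the actual code behind the experiment, just the idea; ctx, width and height come from a canvas that already has the photo drawn on it):

const pixels = ctx.getImageData(0, 0, width, height).data;

function brightnessAt(x, y) {
  const i = ((y | 0) * width + (x | 0)) * 4;
  return (pixels[i] + pixels[i + 1] + pixels[i + 2]) / (3 * 255);
}

// walk along concentric circles and only draw where the image is bright
for (let r = 4; r < Math.min(width, height) / 2; r += 6) {
  ctx.beginPath();
  for (let a = 0; a < Math.PI * 2; a += 0.01) {
    const x = width / 2 + Math.cos(a) * r;
    const y = height / 2 + Math.sin(a) * r;
    if (brightnessAt(x, y) > 0.5) ctx.lineTo(x, y);
    else ctx.moveTo(x, y);
  }
  ctx.stroke();
}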

There is more to explore. Maybe make the circles pulse with sound. Maybe add 3D. Maybe write a shader and run it over a video. These could all be really cool. If someone wants to try please go ahead and let me know what you did. I might try them too. But for now I just want to release this as is. A quick experiment.

I imagine this could be a nice artwork for an album cover. If you agree and happen to know just the band/artist, get in touch and I’ll be happy to work on a print-quality version.

Thank you @williamapan for the amazing ‘Shout’ photo.

Check the experiment here.

Teen Spirit

teen-spirit-01

Even before I finished my previous experiment I already knew the next one was going to be about sound. I wanted to do music visualization in the browser.

And one more time the result of the experiment is quite different from the initial idea. But I quite like that – it is one of the nice things about experimentation: the direction can change any time something interesting appears in the process.

I started by playing around with Web Audio API and looking for references. I found this cool project called Decorated Playlists, a website ‘dedicated to the close relationship between music & design.’ One of the playlists – Run For Cover – had some nice visuals with bold lines coming towards the viewer with a strong perspective. I imagined how those lines could react to music and replicated them in code. On these initial tests I was using a song from that playlist called Espiritu Adolescente, by Mandrágora Tango Orchestra – which is a cool tango version of Nirvana’s Smells Like Teen Spirit. It was looking good, but nothing special. I tried a few variations here and there and eventually dropped the idea.

teen-spirit-03

Time for a new experiment. And a new song. I am a big fan of rock, so I started looking for the next tune in my own music library. I chose God Hates A Coward, by Tomahawk. I love this song. There is an awesome live version on YouTube where we can see Mike Patton barking the lyrics behind a mask.

That mask could be interesting to use in a visualization, so I started googling images of masks.
This one grabbed my attention. It seems to be a drawing based on this photo, but instead of the text on the cylinder, there are just lines. Once again I imagined how those lines could look when reacting to music.

teen-spirit-04

So I went to code to try to replicate that cylinder. I tried a few geometries in three.js, but I realized I needed more control over the vertices. It was one of those moments when my brain just wouldn’t shut down. I remember figuring out how to do it on the street, walking back from lunch. The solution was to divide the bars into segments and then stretch the vertices only up to the limit of each segment. i.e. if a bar has 10 segments and the value it needs to represent is 0.96, the entire bar is not scaled down to 0.96; instead, the first 9 segments keep their full value of 0.1 each and only the last segment is scaled down to 0.06. Then those segments can be distributed around a circle and the shape is preserved for any value.
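As a sketch of the idea (not the actual experiment code):

// a bar split into `segments` pieces, each worth 1/segments of the full height;
// only the topmost partial piece gets scaled
function segmentHeights(value, segments = 10) {
  const step = 1 / segments;
  const heights = [];
  for (let i = 0; i < segments; i++) {
    const remaining = value - i * step;
    heights.push(Math.max(0, Math.min(step, remaining)));
  }
  return heights;
}

// segmentHeights(0.96) -> roughly [0.1 x 9, 0.06]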

The more I saw those bars reacting to music in 3D, the further away I got from the idea of the mask. The circular bars had something of their own and I couldn’t stop playing with them. Eventually I dropped the idea of the mask. And I also dropped the song I was using. In my tests I found that the bars were reacting much better to the Nirvana tango I was using earlier. At this point the two experiments merged.

I feel stupid
And contagious
Here we are now
Entertain us

I didn’t know exactly what to do with all those shapes. All I knew was that some of them were looking pretty cool at certain camera angles and light positions, so I started to create some scenes with my favourite settings. I have to say it was a constant battle in my head between using pre-defined scenes and making everything dynamic. Some people might just start clicking and close the experiment because it doesn’t react. But I wanted to make something tailored for that song. Something like Robert Hodgin’s Solar. Everything is generated by code and runs in real time in the browser, but it could also be a video.

teen-spirit-02

The creation of these scenes is what took most of the time. There was a lot of experimentation and a lot of stuff didn’t make it to the final version. Together with the sound-reactive bars, I can say there were two other major accomplishments: one was to finally get my head around quaternions to be able to tween the camera smoothly (I should write another post about that; I wrote about it on Stack Overflow instead) and the other was to add the words ‘hello’ and ‘how low’ in a way that would fit well with the visuals.
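The quaternion tween itself boils down to something like this (a sketch using the current Three.js API – older versions used a static THREE.Quaternion.slerp – with startPosition, targetPosition and targetLookAt assumed to exist):

const startQ = camera.quaternion.clone();

// measure the orientation we want to end up with...
camera.position.copy(targetPosition);
camera.lookAt(targetLookAt);
const endQ = camera.quaternion.clone();

// ...then restore the camera and tween position and orientation together
camera.position.copy(startPosition);
camera.quaternion.copy(startQ);

function tweenCamera(t) { // t goes from 0 to 1
  camera.position.lerpVectors(startPosition, targetPosition, t);
  camera.quaternion.slerpQuaternions(startQ, endQ, t);
}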

I plan to explore these two topics a bit more in the future. And I definitely want to do more music visualization. Hopefully next time with some rock n’ roll!

Mind The Gap

mind-the-gap-02

After I finished my previous experiment with the Web Audio API I was looking for something else to do with sound. I had this idea of using London Underground’s data and playing a note every time a train left a station. I could assign a different note for each line and the live feed would create random music all day long. So I started checking TfL Developers’ Area and it didn’t take long to realize that my idea wouldn’t be possible. The data does show predicted arrival times for each station, but these are rounded to 30 seconds. If the experiment were to use the data literally, it would stay silent for 30s, then play a bunch of notes at the same time, then go back to silence for another 30s. A friend even suggested randomizing some values in between those 30s, but that wouldn’t be any different from just listening to some random notes chosen by the computer, without any connection to the tube.

OK, that idea was gone, but the data was quite interesting. With the rounded times I could tween the position of the trains between stations. It would be cool to see the trains moving in ‘almost’ real time on the screen, wouldn’t it? Oh wait, someone did it already: Live map of London Underground, by Matthew Somerville. And it is nice, but not really what I had in mind. I wanted more of a cool visualization based on the tube data, rather than an informative/useful map. How could I do something new with this data? Add a third dimension maybe? Three.js was on my list of things to experiment with for a long time and this seemed like the right opportunity. Oh wait, has someone done it already? The only thing I could find was this and it is definitely not what I had in mind. So yeah, green light!

mind-the-gap-01

To do a 3D map of the tube I would need x, y, z values. It is amazing what we can find on the internet – it’s all there: Depth of tube lines and London Underground Station Locations.

I had everything I needed: train times, latitude, longitude and depth of the stations. Those were coming from many different files, so I stretched my regex skills and created a simple tool with Adobe AIR to parse everything and output a consolidated .json for me. With that I could finally plot some points in space using the Mercator projection. The next step was to create tubes connecting these points and again I was really lucky to find exactly what I needed online. Three.js is an amazing library not only because of what it does, but also because of how it is made. Together with the classes I needed (TubeGeometry and SplineCurve3), I also found the conversation between the authors while they were developing these classes.
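The plotting step boils down to something like this (a rough sketch, not the original code: a Mercator-style projection for x/z, station depth for y, and a tube along a spline through the stations – SplineCurve3 has since been replaced by CatmullRomCurve3; stations and scene are assumed to exist):

// project a station into 3D space (scale factors are arbitrary here)
function stationToPosition(lat, lon, depthMetres, scale = 500) {
  const x = (lon * Math.PI / 180) * scale;
  const latRad = lat * Math.PI / 180;
  const z = -Math.log(Math.tan(Math.PI / 4 + latRad / 2)) * scale;
  const y = -depthMetres * 0.05; // exaggerate depth so the lines read in 3D
  return new THREE.Vector3(x, y, z);
}

// one tube per line, through all of its stations in order
const points = stations.map(s => stationToPosition(s.lat, s.lon, s.depth));
const curve = new THREE.CatmullRomCurve3(points);
const tube = new THREE.TubeGeometry(curve, 200, 0.5, 8, false);
scene.add(new THREE.Mesh(tube, new THREE.MeshBasicMaterial({ color: 0xdc241f })));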

One of the biggest challenges was to make the trains follow the spline and sometimes change splines depending on the train’s destination. I feel that my algorithm could be more solid here, but it is working well. The last touches were to add labels for each station and some ambient sound recorded on the tube.

That’s it. I hope people find it interesting and play with it.

Launch the experiment.

Geomerative Sketch

geomerative-01

In my previous post I talked about my attempts at Processing + Typography, but I didn’t post any interactive example. Not because I didn’t want to, but because my sketch uses the OPENGL renderer and it is tricky to publish applets with it. Last night I received a notification about a reply to a post on the Processing forum with some instructions to do just that. Now the applet is published.

geomerative-02

It works fine for me. I asked a few friends to test it and it didn’t work for everyone. Whether it works for you or not depends on your platform and JRE version – and probably the lunar phase and many other things. Please give it a try. The source code is also available.

Launch the sketch.

Processing + Typography

When I first got my hands on the Form + Code book what impressed me the most was the artwork with typography on the titles and opening pages. It reminded me how much I like typography and how focusing on the shape of letters and text can produce beautiful visuals. Then of course it got me trying to reproduce those visuals using Processing. There is no code in the book to help with that. All we get are some short descriptions at the bottom of each title page, like this one:

form-code-04
Form+Code page 9

That is a good hint. I knew I had to start with Python, which didn’t take long to lead me to TTX. It is a great piece of software that describes TrueType and OpenType fonts in an XML-based format. XML is much easier to read than bytecode, and I guess the authors of Form+Code used TTX at some point. But having the fonts described in a text format is not enough; it is still necessary to write Processing code to parse all that and create the points, curves and shapes. TTX to Processing: that would be an interesting library. Before I started to write one myself (as if it would be easy), I went on a search for other contributed libraries that handle typography.

Which led me to Geomerative. A fantastic library for working with TrueType fonts and SVG shapes in Processing. I was very happy when I found it. It was not Python or Postscript, but it would let me achieve similar visuals in a much quicker way.

form-code-01
Form+Code page 26
typography-01
similar visuals made with geomerative

form-code-02
Form+Code page 42
typography-02
similar visuals made with geomerative

form-code-03
Form+Code page 92
typography-03
similar visuals made with geomerative

 
Why it doesn’t look the same
I’ve put some effort into copying the exact look of the titles in the book, but it still looks different. Why?
It all comes down to the differences between TrueType and Postscript.

The external contours are drawn in opposite directions: counter-clockwise in Postscript and clockwise in TrueType. This also affects the starting points of each glyph: Postscript starts where TrueType ends and vice versa. The difference becomes evident in the “Computers” example.

ttf-01
Postscript vs TrueType: start point and direction

There is also a big difference in the way curves are drawn in the two formats. Postscript uses cubic Bézier splines while TrueType uses quadratic Bézier splines, which means TrueType needs more anchor points than Postscript to draw similar curves. The difference becomes evident in the “Working” example.

ttf-02
Postscript vs TrueType: curves

 
Geomerative and OpenType
The OpenType file format can describe both TrueType and Postscript outlines. Is it possible to use OpenType fonts (Postscript flavored) with Geomerative? No. At least not with the current version (rev 34). Geomerative uses a Java library called Batik to parse the TrueType format and convert it to SVG instructions. The Batik library (release 1.7) ignores the CFF table used to describe Postscript outlines. I searched for updates to the Batik library to see if there are plans to support CFF, but it doesn’t look like that is going to happen.

One interesting thing I found was that the Adobe Flex SDK also uses Batik to handle TrueType fonts. Actually, there are four font managers: Batik and JRE to manage TrueType and AFE and CFF to manage TrueType and OpenType.

This post is coming to an end, but there aren’t many conclusions to draw. Except that, as I read somewhere, the OpenType file format is a hard nut to crack. The next steps could be: find a parser for the CFF table and adapt it to Geomerative; or write a parser for the TTX format, as I mentioned at the beginning; or accept the format differences and focus on the visuals of the other titles in the Form+Code book. For now, I’ll probably go for the last option. I’m enjoying the ride and I hope I can post some updates here soon.

flag.in for PlayBook

My first PlayBook app is now available for free on BlackBerry App World.


flag.in – A quiz about flags, countries and capitals

The idea started when I became addicted to a flag quiz game on my Android phone. It was quite fun to play, but it was too ugly and I thought it was missing some features. So why not build one myself? I went ahead with the concept, made a few sketches and got myself an Android development book. Progress was very slow because I was learning the language while building it.

Then I attended Flash Camp Brasil 2011. RIM was there showcasing their new tablet and people got pretty excited about it. Since it is possible to build native apps for the PlayBook using AIR, why not put my ‘flag quiz’ project on track and build it with ActionScript? I started the development as soon as I got back home.

I had a good time working on it. The only differences from a desktop AIR project are the setup (downloading the SDK, installing the simulator, etc) and working with the QNX components. I have already talked about these two topics here.

flagin-01
flag.in game types: country » capital and flag » country

For future updates I’m thinking about adding sound, a horizontal layout, statistics, etc. But I am more interested in what people have to say about it. So go get flag.in and make sure you leave your review and comments. Cheers!

Big Thank You:
– to my beautiful girlfriend Pri, for helping me with the countries info;
– to my friend Toninho, for design tips and early logo designs;
– and to Cássio, for testing the app on a real device.