How to Draw an Offset Curve

I was prototyping something and I needed to draw a curve with some thickness. It wasn't just a case of increasing the stroke width: I wanted to find the contour of a curve, that is, to draw two new curves around the one in the center. After some research, I learnt that the correct term for this is parallel curve or offset curve.

The task turned out to be not as simple as I thought. After some failed attempts I found the solution in a paper by Gabriel Suchowolski entitled ‘Quadratic bezier offsetting with selective subdivision‘. The recipe is there, but I was missing an open source implementation, so I decided to write one.
In this post I present a step-by-step process and, at the end, an interactive version written in JavaScript.

How to draw an offset curve:

Start with 3 points.

Draw a quadratic curve using p1 and p2 as anchors and c as the control point.

Get the vectors between these points.
v1 = c - p1
v2 = p2 - c

Find the vector perpendicular to v1 and scale it to the width (or thickness) of the new curve.
Add the new temporary vector to p1 to find p1a, then subtract it from p1 to find p1b.
Do the same with c to find c1a and c1b.
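The steps so far can be sketched in JavaScript with plain {x, y} objects. The helper names and the function are mine, not from the paper, and which perpendicular ends up on the ‘a’ side depends on the curve's orientation:

```javascript
// Small vector helpers on plain {x, y} objects.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y });
const scale = (v, s) => ({ x: v.x * s, y: v.y * s });
const perp = (v) => ({ x: -v.y, y: v.x });              // rotate 90 degrees
const norm = (v) => scale(v, 1 / Math.hypot(v.x, v.y)); // unit length

function offsetPoints(p1, c, p2, width) {
  const v1 = sub(c, p1);                    // v1 = c - p1
  const v2 = sub(p2, c);                    // v2 = p2 - c
  const n1 = scale(norm(perp(v1)), width);  // perpendicular to v1, scaled to width
  const n2 = scale(norm(perp(v2)), width);  // perpendicular to v2, scaled to width
  return {
    p1a: add(p1, n1), p1b: sub(p1, n1),
    c1a: add(c, n1),  c1b: sub(c, n1),
    c2a: add(c, n2),  c2b: sub(c, n2),
    p2a: add(p2, n2), p2b: sub(p2, n2),
  };
}
```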

Repeat the same process with v2 to find the points on the other side.

Find vectors between the new points. These are parallel to v1 and v2 and offset by the given thickness.

The intersection points of these vectors are the new control points ca and cb.
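Finding ca and cb comes down to intersecting two lines. A hypothetical helper, solving p + t·d = q + s·e for the intersection point:

```javascript
// Intersection of the line through p with direction d and the line
// through q with direction e. Returns null for (near) parallel lines.
function intersect(p, d, q, e) {
  const cross = d.x * e.y - d.y * e.x;
  if (Math.abs(cross) < 1e-10) return null;
  const t = ((q.x - p.x) * e.y - (q.y - p.y) * e.x) / cross;
  return { x: p.x + t * d.x, y: p.y + t * d.y };
}

// e.g. ca is where the offset of v1 (through p1a) meets the offset of v2 (through p2a):
// const ca = intersect(p1a, v1, p2a, v2);
```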

Draw a curve from p1a to p2a with control point at ca.
Draw another curve from p1b to p2b with control point at cb.
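On an HTML canvas the two curves are a pair of quadraticCurveTo calls. A sketch assuming the points computed above are collected in a pts object (the function name is mine):

```javascript
// ctx is a CanvasRenderingContext2D; pts holds the offset points
// p1a, ca, p2a (one side) and p1b, cb, p2b (the other side).
function drawOffset(ctx, pts) {
  ctx.beginPath();
  ctx.moveTo(pts.p1a.x, pts.p1a.y);
  ctx.quadraticCurveTo(pts.ca.x, pts.ca.y, pts.p2a.x, pts.p2a.y);
  ctx.moveTo(pts.p1b.x, pts.p1b.y);
  ctx.quadraticCurveTo(pts.cb.x, pts.cb.y, pts.p2b.x, pts.p2b.y);
  ctx.stroke();
}
```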

This method works only when the angle between v1 and v2 is wide (bigger than 90 degrees), it doesn’t work for sharp angles.

For angles smaller than 90 degrees it is necessary to split the curve. In fact the curve could be split several times: the more splits, the better the precision of the offset curve. Splitting only at 90 degrees is fast and the result is not too bad.

The curve needs to be split at t, the point on the curve closest to c. The technique to find t is described in the paper mentioned above. It requires solving a third degree polynomial of the form ax³ + bx² + cx + d = 0.

The equation returns a number between 0 and 1 that can be evaluated on the curve to find t.
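The paper solves the cubic analytically; as a rough numeric stand-in, t can also be found by sampling the curve for the point closest to c:

```javascript
// Evaluate the quadratic Bezier B(t) defined by anchors p1, p2 and control c.
function bezierPoint(p1, c, p2, t) {
  const u = 1 - t;
  return {
    x: u * u * p1.x + 2 * u * t * c.x + t * t * p2.x,
    y: u * u * p1.y + 2 * u * t * c.y + t * t * p2.y,
  };
}

// Brute-force search for the t in [0, 1] whose point is closest to c.
// A numeric approximation, not the paper's closed-form cubic solve.
function closestT(p1, c, p2, steps = 1000) {
  let best = 0;
  let bestD = Infinity;
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    const p = bezierPoint(p1, c, p2, t);
    const d = (p.x - c.x) ** 2 + (p.y - c.y) ** 2;
    if (d < bestD) { bestD = d; best = t; }
  }
  return best;
}
```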

Find the tangent of t and the points t1 and t2 where it intersects v1 and v2.
Create a new vector perpendicular to the tangent of t, scale it to the given thickness and find qa and qb. This vector splits the original curve at t.
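Splitting the original curve at t is standard de Casteljau subdivision; it is not spelled out in the steps above, but it is what produces the two halves that each get offset on their own:

```javascript
// Split the quadratic Bezier (p1, c, p2) at parameter t into two
// quadratics that together trace exactly the same curve.
function splitQuadratic(p1, c, p2, t) {
  const lerp = (a, b) => ({ x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t });
  const q1 = lerp(p1, c);
  const q2 = lerp(c, p2);
  const m = lerp(q1, q2); // the point on the curve at t
  return [
    { p1: p1, c: q1, p2: m },
    { p1: m, c: q2, p2: p2 },
  ];
}
```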

Add the tangent of t to qa and qb and find the points where it intersects the offset vectors.

These are all the points needed to draw an offset curve. All the others that were created in the process can be removed for clarity.
Draw a curve with anchors at p1a and qa and the control point at q1a.

Repeat the process for all the new points to get the offset curve.

Here is an interactive version. Drag the gray dots to change the curve.

See the Pen VYEWgY by Bruno Imbrizi (@brunoimbrizi) on CodePen.

Thanks to:
– Gabriel Suchowolski (aka @microbians) for his paper
– toxiclibs for the really handy Vec2D and Line2D classes
– Professor Eric Schechter for The Cubic Formula

Circular Images


It was a sunny day in London. I was sitting under a tree and had this idea: what if I could scan an image and draw the bright areas with circular lines? Bullshit. I stole it. This guy did it first.

I absolutely loved the visuals and had to do it myself. He tagged it as #processing but I didn’t find a sketch or a video so I could only guess how it worked. Once I got the basics working I had a few ideas for variations and threw some sliders in. It’s not completely new, but I like to think my small additions are valid.

There is more to explore. Maybe make the circles pulse with sound. Maybe add 3D. Maybe write a shader and run it over a video. These could all be really cool. If someone wants to try please go ahead and let me know what you did. I might try them too. But for now I just want to release this as is. A quick experiment.

I imagine this could be a nice artwork for an album cover. If you agree and happen to know just the band/artist, get in touch and I’ll be happy to work on a version at good print quality.

Thank you @williamapan for the amazing ‘Shout’ photo.

Check the experiment here.

Teen Spirit


Even before I finished my previous experiment I already knew the next one was going to be about sound. I wanted to do music visualization in the browser.

And one more time the result of the experiment is quite different from the initial idea. But I quite like that; it is one of the nice things about experimentation: the direction can change any time something interesting appears in the process.

I started by playing around with the Web Audio API and looking for references. I found this cool project called Decorated Playlists, a website ‘dedicated to the close relationship between music & design.’ One of the playlists – Run For Cover – had some nice visuals with bold lines coming towards the viewer with a strong perspective. I imagined how those lines could react to music and replicated them in code. In these initial tests I was using a song from that playlist called Espiritu Adolescente, by Mandrágora Tango Orchestra – a cool tango version of Nirvana’s Smells Like Teen Spirit. It was looking good, but nothing special. I tried a few variations here and there and eventually dropped the idea.


Time for a new experiment. And a new song. I am a big fan of rock, so I started looking for the next tune in my own music library. I chose God Hates A Coward, by Tomahawk. I love this song. There is an awesome live version on YouTube where we can see Mike Patton barking the lyrics behind a mask.

That mask could be interesting to use in a visualization, so I started googling images of masks.
This one grabbed my attention. It seems to be a drawing based on this photo, but instead of the text on the cylinder, there are just lines. Once again I imagined how those lines could look when reacting to music.


So I went to code, trying to replicate that cylinder. I tried a few geometries in three.js, but I realized I needed more control over the vertices. It was one of those moments when my brain just wouldn’t shut down. I remember figuring out how to do it on the street, walking back from lunch. The solution was to divide the bars into segments and then stretch the vertices only up to the limit of each segment. For example, if a bar has 10 segments and the value it needs to represent is 0.96, the entire bar is not scaled down to 0.96; instead, the first 9 segments keep their full value of 0.1 each and only the last segment is scaled down to 0.06. Those segments can then be distributed around a circle and the shape is preserved for any value.
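A minimal sketch of that segment logic, for a bar of unit height (the function name and signature are my own, not the experiment's actual code):

```javascript
// Split a value in [0, 1] across equal segments of a unit-height bar:
// full segments keep their full height, only the last partial one scales.
function segmentHeights(value, segments = 10) {
  const full = 1 / segments; // height of one complete segment
  const heights = [];
  for (let i = 0; i < segments; i++) {
    const remaining = value - i * full;
    heights.push(Math.max(0, Math.min(full, remaining)));
  }
  return heights;
}
```

With segmentHeights(0.96, 10) the first nine entries stay at 0.1 and only the tenth drops to 0.06, so bending the segments around a circle keeps the bar's shape for any value.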

The more I saw those bars reacting to music in 3D, the further away I got from the idea of the mask. The circular bars had something of their own and I couldn’t stop playing with them. Eventually I dropped the idea of the mask. And I also dropped the song I was using: in my tests the bars were reacting much better to the Nirvana tango from earlier. At this point the two experiments merged.

I feel stupid
And contagious
Here we are now
Entertain us

I didn’t know exactly what to do with all those shapes. All I knew was that some of them were looking pretty cool at certain camera angles and light positions, so I started to create some scenes with my favourite settings. I have to say it was a constant battle in my head between using pre-defined scenes and making everything dynamic. Some people might just start clicking and close the experiment because it doesn’t react. But I wanted to make something tailored for that song. Something like Robert Hodgin’s Solar. Everything is generated by code and runs in real time in the browser, but it could also be a video.


The creation of these scenes is what took most of the time. There was a lot of experimentation and a lot of stuff didn’t make it to the final version. Together with the sound reactive bars, I can say there were two other major accomplishments: one was to finally get my head around quaternions to be able to tween the camera smoothly – I should write another post about that, but I ended up writing about it on Stack Overflow instead – and the other was to add the words ‘hello’ and ‘how low’ in a way that would fit well with the visuals.

I plan to explore these two topics a bit more in the future. And I definitely want to do more music visualization. Hopefully next time with some rock n’ roll!

Mind The Gap


After I finished my previous experiment with the Web Audio API I was looking for something else to do with sound. I had this idea of using London Underground’s data and playing a note every time a train left a station. I could assign a different note for each line and the live feed would create random music all day long. So I started checking TfL Developers’ Area and it didn’t take long to realize that my idea wouldn’t be possible. The data does show predicted arrival times for each station, but these are rounded to 30 seconds. If the experiment were to use the data literally, it would stay silent for 30s, then play a bunch of notes at the same time, then go back to silence for another 30s. A friend even suggested randomizing some values in between those 30s, but that wouldn’t be any different from just listening to some random notes chosen by the computer, without any connection to the tube.

OK, that idea was gone, but the data was quite interesting. With the rounded times I could tween the position of the trains between stations. It would be cool to see the trains moving in ‘almost’ real time on the screen, wouldn’t it? Oh wait, someone did it already: Live map of London Underground, by Matthew Somerville. And it is nice, but not really what I had in mind. I wanted more of a cool visualization based on the tube data, rather than an informative/useful map. How could I do something new with this data? Add a third dimension maybe? Three.js was on my list of things to experiment with for a long time and this seemed like the right opportunity. Oh wait, has someone done it already? The only thing I could find was this and it is definitely not what I had in mind. So yeah, green light!
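The tweening idea mentioned above boils down to linear interpolation between station coordinates over the predicted time window. A sketch with hypothetical names, not the experiment's actual code:

```javascript
// Interpolate a train's position between two stations, given when it
// departed and the (rounded) predicted arrival time. Times in seconds.
function trainPosition(from, to, departed, arrival, now) {
  // Clamp t to [0, 1] so the train never overshoots the station.
  const t = Math.min(1, Math.max(0, (now - departed) / (arrival - departed)));
  return {
    x: from.x + (to.x - from.x) * t,
    y: from.y + (to.y - from.y) * t,
  };
}
```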


To do a 3D map of the tube I would need x, y, z values. It is amazing what we can find on the internet – it’s all there: Depth of tube lines and London Underground Station Locations.

I had everything I needed: train times, latitude, longitude and depth of the stations. Those were coming from many different files, so I stretched my regex skills and created a simple tool with Adobe AIR to parse everything and output a consolidated .json for me. With that I could finally plot some points in space using the Mercator projection. The next step was to create tubes connecting these points and again I was really lucky to find exactly what I needed online. Three.js is an amazing library not only because of what it does, but also because of how it is made. Together with the classes I needed (TubeGeometry and SplineCurve3), I also found the conversation between the authors while they were developing these classes.
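The Mercator step is essentially a one-liner per station. A sketch with an arbitrary scale factor (my own version, not the code from the experiment):

```javascript
// Project latitude/longitude (degrees) to x/y with the Mercator projection.
function mercator(latDeg, lonDeg, scale = 1) {
  const lat = (latDeg * Math.PI) / 180;
  const lon = (lonDeg * Math.PI) / 180;
  return {
    x: scale * lon,
    y: scale * Math.log(Math.tan(Math.PI / 4 + lat / 2)),
  };
}
```

The station depth then maps directly to the z coordinate.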

One of the biggest challenges was to make the trains follow the spline and sometimes change splines depending on the train’s destination. I feel that my algorithm could be more solid here, but it is working well. The last touches were to add the labels for each station and add some ambient sound recorded at the tube.

That’s it. I hope people find it interesting and play with it.

Launch the experiment.

Geomerative Sketch


In my previous post I talked about my attempts at Processing + Typography, but I didn’t post any interactive example. Not because I didn’t want to, but because my sketch uses the OPENGL renderer and it is tricky to publish applets with it. Last night I received a notification about a reply to a post on the Processing forum with some instructions to do just that. Now the applet is published.


It works fine for me, but I asked a few friends to test it and it didn’t work for everyone. Whether it works for you depends on the platform and JRE version – and probably the lunar phase and many other things. Please give it a try. The source code is also available.

Launch the sketch.

Processing + Typography

When I first got my hands on the Form + Code book what impressed me the most was the artwork with typography on the titles and opening pages. It reminded me how much I like typography and how focusing on the shape of letters and text can produce beautiful visuals. Then of course it got me trying to reproduce those visuals using Processing. There is no code in the book to help with that. All we get are some short descriptions at the bottom of each title page, like this one:

Form+Code page 9

That is a good hint. I knew I had to start with Python, which didn’t take long to lead me to TTX. It is a great piece of software that describes TrueType and OpenType fonts in an XML-based format. XML is much easier to read than bytecode and I guess the authors of Form+Code used TTX at some point. But having the fonts described in a text format is not enough, it is necessary to write Processing software to parse all that and create the points and curves and shapes. TTX to Processing: that would be an interesting library. Before I started to write one myself (like it would be easy), I went on a search for other contributed libraries to handle typography.

Which led me to Geomerative: a fantastic library to work with TrueType fonts and SVG shapes in Processing. I was very happy when I found it. It was not Python or PostScript, but it would let me achieve similar visuals in a much quicker way.

Form+Code page 26
similar visuals made with geomerative

Form+Code page 42
similar visuals made with geomerative

Form+Code page 92
similar visuals made with geomerative

Why it doesn’t look the same
I’ve put some effort into copying the exact look of the titles in the book, but it still looks different. Why?
It all comes down to the differences between TrueType and PostScript.

The external contours are drawn in opposite directions: counter-clockwise in PostScript and clockwise in TrueType. This also affects the starting points of each glyph: PostScript starts where TrueType ends and vice-versa. This difference becomes evident in the “Computers” example.

Postscript vs TrueType: start point and direction
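One way to see the direction difference in code: the signed area of a closed contour tells you whether its points run counter-clockwise or clockwise. This is a generic shoelace-formula check, not something Geomerative exposes, and it assumes a y-up coordinate system:

```javascript
// Shoelace formula: positive area means the contour runs
// counter-clockwise, negative means clockwise (y-up coordinates).
function signedArea(points) {
  let area = 0;
  for (let i = 0; i < points.length; i++) {
    const a = points[i];
    const b = points[(i + 1) % points.length];
    area += a.x * b.y - b.x * a.y;
  }
  return area / 2;
}
```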

There is also a big difference in the way curves are drawn in the two formats: PostScript uses cubic Bézier splines while TrueType uses quadratic Bézier splines, which means TrueType needs more anchor points than PostScript to draw similar curves. The difference becomes evident in the “Working” example.

Postscript vs TrueType: curves
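The two curve types are related by degree elevation: every quadratic Bézier can be written exactly as a cubic (which is why conversion is lossless in that direction but not the reverse). A small illustration:

```javascript
// Degree elevation: express a quadratic Bezier (q0, q1, q2) as an
// equivalent cubic with control points c1 = q0 + 2/3(q1-q0) and
// c2 = q2 + 2/3(q1-q2).
function quadraticToCubic(q0, q1, q2) {
  const lerp = (a, b, t) => ({ x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t });
  return {
    c0: q0,
    c1: lerp(q0, q1, 2 / 3),
    c2: lerp(q2, q1, 2 / 3),
    c3: q2,
  };
}
```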

Geomerative and OpenType
The OpenType file format can describe both TrueType and PostScript outlines. Is it possible to use OpenType fonts (PostScript flavored) with Geomerative? No. At least not with the current version (rev 34). Geomerative uses a Java library called Batik to parse the TrueType format and convert it to SVG instructions. The Batik library (release 1.7) ignores the CFF table used to describe PostScript outlines. I searched for updates to the Batik library to see if there were plans to support CFF, but it doesn’t look like that is going to happen.

One interesting thing I found was that the Adobe Flex SDK also uses Batik to handle TrueType fonts. Actually, there are four font managers: Batik and JRE to manage TrueType and AFE and CFF to manage TrueType and OpenType.

This post is coming to an end, but there aren’t many conclusions to draw. Except that, as I read somewhere, the OpenType file format is a hard nut to crack. The next steps could be: find a parser for the CFF table and adapt it to Geomerative; or write a parser for the TTX format as I mentioned at the beginning; or accept the format differences and focus on the visuals of the other titles in the Form+Code book. For now, I’ll probably go for the last option. I’m enjoying the ride and I hope I can post some updates here soon.

Flagin for PlayBook

My first PlayBook app – a quiz about flags, countries and capitals – is now available for free on BlackBerry App World.

The idea to make it started when I became addicted to a flag quiz game on my Android phone. It was quite fun to play, but it was too ugly and I thought it was missing some features. So why not build one myself? I went ahead with the concept and a few sketches, and got myself an Android development book. Progress was very slow because I was learning the language while building it.

Then I attended Flash Camp Brasil 2011. RIM was there showcasing their new tablet and people got pretty excited about it. Since it is possible to build native apps for the PlayBook using AIR, why not put my ‘flags quiz’ project back on track and build it with ActionScript? I started the development as soon as I got back home.

I had a good time working on it. The only differences from a desktop AIR project are the setup (downloading the SDK, installing the simulator, etc) and working with the QNX components. I have already talked about these two topics here.

game types: country » capital and flag » country

For future updates I am thinking about adding sound, a horizontal layout, statistics, etc. But I am more interested in what people have to say about it. So go get it and make sure you leave your review and comments. Cheers!

Big Thank You:
– to my beautiful girlfriend Pri, for helping me with the countries info;
– to my friend Toninho, for design tips and early logo designs;
– and to Cássio, for testing the app on a real device.