My 2D physics engine has come a long way since I started the project on GitHub. It has been integrated into one of the biggest 2D HTML5 game frameworks, Phaser, and it starred in the official Google I/O experiment for 2015. I'm just very happy and wanted to express it.
PhysicsToy – halfway there…
Recently, I’ve been working on a 2D physics editor called PhysicsToy. It makes it possible to create these kinds of simulations, without coding:
Before I get bored and want to start a new cool hobby project, I wanted to report the current status of the web app.
Frontend: Angular.js and Pixi.js
I used the p2.js debug renderer and polished it up a bit. Then I added some Angular.js magic. I've wanted to learn Angular for a while, and PhysicsToy was a great project to use it in. I hooked Angular up to a simple list-like menu and added some code that updates the p2.js world as the Angular data changes. Voilà, PhysicsToy was born.
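In essence, the binding works something like this sketch (not the actual PhysicsToy code; the controller name and scene format are made up, and the exact p2.js constructor signatures depend on the version):

angular.module('physicsToy', []).controller('SceneCtrl', function($scope){
    var world = new p2.World({ gravity: [0, -10] });

    $scope.scene = { bodies: [] }; // edited through the list-like menu

    // Rebuild the p2 world whenever the scene data changes
    $scope.$watch('scene', function(scene){
        world.clear();
        scene.bodies.forEach(function(data){
            var body = new p2.Body({ mass: data.mass, position: data.position });
            body.addShape(new p2.Circle(data.radius));
            world.addBody(body);
        });
    }, true); // deep watch, since the scene is a nested object
});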
Backend: Node.js and Postgres on Heroku
Another thing I wanted to try was Postgres. I’ve been using MySQL in other projects, but why not try something new, and at the same time choose open source.
Postgres didn't let me down. It offers a JSON data type, which is convenient for my Angular scene data. Postgres seems more consistent and, in general, more thought-through than MySQL, even though both are based on the same SQL language.
Before pushing the data to Postgres, I do some validation using JSON Schema. I use an interesting solution for version handling of the JSON: I store the scene data as it is and never upgrade it in the database, but I upgrade it on the fly when serving it to clients. The benefit of this solution is that the original scenes can stay untouched in the database forever, and it's ideal while the app is under development with a constantly changing data model. The only downside is that the upgrading takes some server juice.
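The upgrade step is basically a chain of small converter functions, one per schema version. Something like this sketch (the upgraders here are made-up examples, not the real ones):

var CURRENT_VERSION = 3;

// Each function upgrades a scene from version N to version N+1
var upgraders = {
    1: function(scene){ scene.gravityY = scene.gravity; delete scene.gravity; return scene; },
    2: function(scene){ scene.bodies = scene.bodies || []; return scene; }
};

function upgradeScene(scene){
    while(scene.version < CURRENT_VERSION){
        scene = upgraders[scene.version](scene);
        scene.version++;
    }
    return scene; // the stored row is untouched; only the served copy is upgraded
}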
I had a fun time coding this, and I hope it will grow into something big!
Upgrading to OSX 10.10 Yosemite
When upgrading my Macbook Pro 15″ Retina to Yosemite, Homebrew, Postgres and RabbitMQ broke. But, they were quite easy to get running again.
Homebrew
If you didn't upgrade Homebrew before you upgraded to Yosemite (why would you?), you might have a broken Homebrew: none of the brew commands work. To get it working again, follow these steps.
First, update homebrew via git:
cd /usr/local/Library;
git pull origin master;
Next, use homebrew to update and clean your installed packages:
brew update;
brew prune;
brew doctor;
Homebrew should now work.
Postgres
When installing Yosemite, some Postgres folders are removed for some reason, and these folders are required for Postgres to run.
$ postgres -D /usr/local/var/postgres
FATAL: could not open directory "pg_tblspc": No such file or directory
To fix this, and to prevent Yosemite from removing the folders again, run:
mkdir -p /usr/local/var/postgres/{pg_tblspc,pg_twophase,pg_stat_tmp}/
touch /usr/local/var/postgres/{pg_tblspc,pg_twophase,pg_stat_tmp}/.keep
RabbitMQ
RabbitMQ would not start for me, for some reason.
Status of node 'rabbit@xxx' ...
Error: unable to connect to node 'rabbit@xxx': nodedown
Checking the broker log, it said “cannot_delete_plugins_expand_dir”:
Error description:
{error,
 {cannot_delete_plugins_expand_dir,
  ["/opt/local/var/lib/rabbitmq/mnesia/rabbit@schmac-plugins-expand",
   {cannot_delete,
    "/opt/local/var/lib/rabbitmq/mnesia/rabbit@schmac-plugins-expand",
    eacces}]}}
This was clearly a permission problem, easily solved by setting the rabbitmq user as the owner of the directory:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq/
How much garbage is my JavaScript producing?
While reading this article about garbage collection in JavaScript, a question popped into my head: how can I test how much garbage a piece of code is producing?
I’ve written a small snippet that can measure this for you. What you need is a recent version of Google Chrome.
To be able to measure the memory heap size, we need to enable it in Chrome. Chrome can give you access to the state of the JavaScript heap (the memory allocated to JavaScript objects), but to get it to work you'll need to start it with the --enable-precise-memory-info flag:
chrome --enable-precise-memory-info
Or, on OSX:
open -a "Google Chrome" --args --enable-precise-memory-info
Now, create an HTML file containing the following code.
This will write the number of bytes allocated between "// Start" and "// End" to the console every tenth of a second.
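The snippet is essentially a setInterval loop that samples usedJSHeapSize before and after the code you want to measure. Roughly like this (a minimal sketch; the exact code may differ slightly):

setInterval(function(){
    var before = window.performance.memory.usedJSHeapSize;
    // Start
    // ... the code you want to measure goes here ...
    // End
    var diff = window.performance.memory.usedJSHeapSize - before;
    console.log(diff - 40); // subtract the garbage produced by the measurement itself
}, 100);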
My current version of Chrome (40.0.2214.115) produces 40 bytes of garbage just running this function, which is why I subtract 40 bytes from the output number. You may need to change this depending on your Chrome version and settings.
If you run this script, you will notice that the first output numbers are
2424
728
0
0
0
...
The first numbers are probably due to initialization garbage. After a little while, the initialization garbage is gone and we can see that the number of bytes allocated in the loop is 0.
Now, let's allocate something in the loop and see what happens. If I, for example, allocate a plain object inside the above loop, like this,
setInterval(function(){
    var before = window.performance.memory.usedJSHeapSize;
    var obj = new Object();
    var diff = window.performance.memory.usedJSHeapSize - before;
    console.log(diff - 40);
}, 100);
then the output is
3360
752
56
56
56
56
...
We conclude that a plain JavaScript object takes 56 bytes to allocate.
You can use this code snippet to measure how much GC load different pieces of code add to your game loop. Why not try this JSFiddle to get started? Good luck!
p2.js – 2D JavaScript physics
I recently started a small 2D physics engine project that I, for now, call p2.js. It contains just the basics: spheres, particles and planes. They can interact either through frictionless contacts or through spring forces.
There are a few reasons why I wanted to roll my own engine, but the most important one for now was the question of whether or not to use typed arrays. Now I've got the answer.
I followed Brandon Jones' simple instructions on how to use typed arrays correctly, and as one of his slides implies, I was indeed barfing rainbows when I saw the results. Okay, maybe not barfing rainbows, but I must admit I was a bit surprised.
In total I get a performance gain of about 30% when using Float32Array instead of vector objects such as {x:2, y:1}. This number, 30%, is just a very rough estimate, because it depends on a lot of different parameters; however, my implementation made it relatively easy to switch between the two. Another thing I noticed was that switching between ordinary Arrays and Float32Array didn't affect performance much at all, though there are a few other advantages to using the latter.
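To make the difference concrete, here is a small sketch of the two styles (not the actual p2.js code):

// Object vectors: readable, but every operation allocates a new object
function addObj(a, b){
    return { x: a.x + b.x, y: a.y + b.y };
}

// Typed-array vectors: the caller provides an "out" array, so nothing is allocated
function addTyped(out, a, b){
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
    return out;
}

var a = new Float32Array([1, 2]);
var b = new Float32Array([3, 4]);
var result = new Float32Array(2); // allocated once, reused every frame
addTyped(result, a, b);           // no allocation inside the simulation loop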
The key to using Float32Arrays is to avoid creating new ones. In the physics engine case this can get a bit tricky, since things are added to and removed from the simulation in every timestep. A good example is the contacts: when two geometries collide I need a new ContactEquation instance in the engine. This is basically a holder for a number of vectors, so making a new instance every time one is needed is a no-go. To solve this I made sure these objects are reused between timesteps, and if there are excess objects, I store them for later use.
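The reuse works roughly like this simplified sketch (not the actual engine code):

// ContactEquation is basically a holder for a few reusable vectors
function ContactEquation(){
    this.normal = new Float32Array(2);
    this.contactPointA = new Float32Array(2);
    this.contactPointB = new Float32Array(2);
}

var contactPool = [];

function getContactEquation(){
    // Reuse an old instance if there is one, otherwise allocate a new one
    return contactPool.length ? contactPool.pop() : new ContactEquation();
}

function releaseContactEquation(eq){
    // Keep excess objects for the next timestep instead of letting them become garbage
    contactPool.push(eq);
}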
The scene you see in the image above is a simulation of 900 circles trapped in a container consisting of 3 planes. The number of solver iterations is 10, though I can get reasonable results with fewer iterations too. Rendering is done with a small 2D demo renderer I built with Three.js.
I’m going to make a demo page for the engine soon, though for now only the code is available.
The split solver in Cannon.js
As I had recently implemented some graph traversal code, I was keen on using it in Cannon.js. You may ask: why use graph algorithms in a physics engine? The idea is a bit tricky, but I'll try to explain.
In the usual contact solving case, we iterate over all contacts in the system. In each iteration, we transfer impulses from one body to another. In the example case of stacked boxes, iterating like this will make the top box "feel" impulses from the bottom box as the impulses travel through the stack (how well this works depends on how many times we iterate over the system and what iteration order we use, but that's another problem).
Many solvers have a “tolerance” parameter and it is used to check when to stop iterating. If the total error (read: contact overlap) is small, then the solution is good enough and we can stop. The tolerance is compared to the *sum* of all errors in the system.
Say we have one stack of boxes on a static plane and a sphere on the same plane. The total error will include both the errors from the stack and from the sphere. We will stop iterating over all the contacts when the total error is below the tolerance limit. Say we reach M iterations. This means we compute stuff M times on each of the N contacts in the system (a total of N*M computations).
Now what if we split the system into two independent systems, one for the stack+plane and one for the sphere+plane, and run the solver once for each of the two systems? The stack will probably still need M iterations, but the interesting thing is that the sphere will only need one. Why? Because the case of a single contact does not need to propagate impulses, and it can directly report the exact solution.
So, in the big system we need M*N computations, while in the split system we need roughly M*(N-1): M iterations on the N-1 stack contacts, plus a single solve for the sphere contact. That's great! And this strategy works for many other systems too. In most cases, we can get away cheaper by using a split solver.
But what about the graph algorithm, you may ask. It is used to find the independent subsystems, and yes, it adds some complexity. However, that computation is not as heavy as the solve step, and it has linear complexity (with respect to the number of contacts and bodies). The solving complexity, on the other hand, depends on the number of contacts times the number of iterations, and the number of iterations should grow linearly with the number of contacts to be able to propagate all impulses across the system, so we end up with quadratic solving complexity.
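For the curious, the island splitting can be sketched like this (this is not the actual CANNON.SplitSolver code, just the general idea: bodies are nodes, contacts are edges, and static bodies such as the ground plane do not connect islands):

function splitIntoIslands(bodies, contacts){
    var bodyToIsland = new Map();
    var seenContacts = new Set();
    var islands = [];

    bodies.forEach(function(body){
        if(body.isStatic || bodyToIsland.has(body)) return;

        // Depth-first traversal over dynamic bodies connected by contacts
        var island = { bodies: [], contacts: [] };
        var stack = [body];
        bodyToIsland.set(body, island);

        while(stack.length){
            var current = stack.pop();
            island.bodies.push(current);

            contacts.forEach(function(c){
                if(c.bodyA !== current && c.bodyB !== current) return;
                if(!seenContacts.has(c)){
                    seenContacts.add(c);
                    island.contacts.push(c);
                }
                var other = c.bodyA === current ? c.bodyB : c.bodyA;
                if(!other.isStatic && !bodyToIsland.has(other)){
                    bodyToIsland.set(other, island);
                    stack.push(other);
                }
            });
        }
        islands.push(island);
    });

    return islands; // run the solver once per island
}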
There is another advantage with the split solver: the subsystems can be solved in parallel. But that is another story!
The CANNON.SplitSolver class is available in the cannon.js/dev branch, and here is a live demo where you can toggle split for a scene.
Recent Node.js development
Recently I’ve been working a lot on the Node.js version of friendship-bracelets.net. Here’s a quick status report.
Code reduction
I’ve reduced the code to less than half its size by abstracting key parts of the code and compressing a few static JS files. I really love JavaScript – abstracting code has never been easier (and dirtier).
One interesting abstraction I made was an “edit resource” page. For each resource I have a Schema class instance that can help create an HTML form and then validate input from the client.
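Roughly, the idea looks like this (a loose sketch with made-up names; the real Schema class is more involved and does proper HTML escaping):

function Schema(fields){
    this.fields = fields; // e.g. { title: { required: true, maxLength: 100 } }
}

// Generate a simple HTML form for the resource
Schema.prototype.toFormHTML = function(values){
    values = values || {};
    return Object.keys(this.fields).map(function(name){
        var value = values[name] || '';
        return '<label>' + name + ' <input name="' + name + '" value="' + value + '"></label>';
    }).join('\n');
};

// Validate input posted from the client
Schema.prototype.validate = function(body){
    var errors = [];
    var fields = this.fields;
    Object.keys(fields).forEach(function(name){
        var def = fields[name], value = body[name];
        if(def.required && !value) errors.push(name + ' is required');
        if(def.maxLength && value && value.length > def.maxLength) errors.push(name + ' is too long');
    });
    return errors;
};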
Nginx
Other news is that I've started using Nginx to serve static files and proxy the rest to Node. It was really easy to set up, so I will probably continue using Nginx. The only drawback is that I really want phpMyAdmin for administrating my database, so I still have to run an instance of Apache… Perhaps I'll find a solution to this later on. I will post instructions about my setup when it's stable.
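In the meantime, the config is roughly of this shape (a minimal sketch, not my exact setup; the paths and port are placeholders):

server {
    listen 80;
    server_name friendship-bracelets.net;

    # Serve static files straight from disk
    location /static/ {
        root /var/www/fb;
    }

    # Everything else is proxied to the Node.js app
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}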
Chat using Server-Sent Events
Another thing I've built is a chat client. I was really excited while working on it, because it's a whole new concept for the site. This way we can get even closer interaction with the users.
The tech behind the real-time chat is server-sent events, or more specifically, the EventSource API in HTML5. Earlier I was determined to use WebSocket, but since server-sent events are better supported (on both the client and server side) and good enough for the purpose, I went that way.
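The mechanics are simple: on the server, an Express route keeps the response open and writes event-stream data; on the client, the browser's EventSource API receives it. A minimal sketch (not the real chat code; chatEmitter here stands in for whatever emits new messages):

// Server side (Express)
app.get('/chat/stream', function(req, res){
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });

    var send = function(message){
        res.write('data: ' + JSON.stringify(message) + '\n\n');
    };

    chatEmitter.on('message', send); // assumed: a shared Node.js EventEmitter for chat messages
    req.on('close', function(){ chatEmitter.removeListener('message', send); });
});

// Client side
var source = new EventSource('/chat/stream');
source.onmessage = function(event){
    console.log('New chat message:', JSON.parse(event.data));
};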
Caching of Express views
I've probably mentioned how to do this earlier, but now I've actually tried it. When starting the Express app with the environment variable NODE_ENV set to "production", I simply run
app.configure("production", function(){
    app.enable("view cache");
});
This makes Express cache the templates inside the app, which makes it faster. Combined with relieving the app from serving static files (using Nginx), this makes the web app perform really well. It almost feels like running the app locally, when in reality it's on a virtual machine in a datacenter somewhere else.
Mobile app thoughts
I've already started using jQuery Mobile for the mobile site, but it will probably only make the site harder to maintain. It requires different HTML for the layout, which duplicates a lot of code. I'm starting to think it might be better to just add some mobile-specific CSS instead.
Express: multi-language site
A question came in from a user asking if the site could be translated into Russian (he even offered to help). Making an Express app support multiple languages is easy using e.g. i18n-node. However, making friendship-bracelets.net multilingual is probably not that easy. As I see it, we have 3 options for a multi-language implementation.
- Just translating the menu buttons and some of the text. This will encourage people to comment in their own language (if the site is in your language, you will probably write in your own language), and I think it would be really confusing if everyone posted stuff in mixed languages.
- Separating the languages into their own sites with separate content. Each site would have its own set of content that the others don't have. Not really cool, and not really a good option in this case.
- Sharing all content between the language sites. Let's say we have two sites in different languages, A and B, and they share all content. Should all content in A also be visible to users of B? For some content, yes. But what if you post content on your own-language site A (English) and get a comment on it from site B (Russian)? It gets more complicated the more you think about it, and there will be special rules for just about everything.
Discussing this with the mods led to the decision to do nothing about the multi-language question for now. However, it would be really cool to try out i18n in the future (I've never tried it before).
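For reference, hooking i18n-node into an Express app would look roughly like this (just a sketch of trying it out, not something running on the site):

var i18n = require('i18n');

i18n.configure({
    locales: ['en', 'ru'],
    directory: __dirname + '/locales', // en.json and ru.json hold the translated strings
    defaultLocale: 'en'
});

app.use(i18n.init);

app.get('/', function(req, res){
    res.send(res.__('Welcome to friendship-bracelets.net'));
});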
Faust's 20th anniversary
This weekend we celebrated the 20th anniversary of the F-section's party committee Faust. It was a fun weekend, and lots of old members came to Umeå to celebrate together with the current students, Gamling style.
Personally, I thought the best part was getting to celebrate with my closest friends. The Schneigel collective, Pelikan, and even Stitches made it to Umeå.
A video says more than a thousand pictures.
Autocomplete with a life of its own
Since I switched to the SwiftKey keyboard on my phone, I have had some good laughs at its autocomplete feature. The app suggests the next word based on what you have typed before. But I've noticed that it can "talk to itself" if you keep accepting the suggested next word over and over again.
You can try it yourself by pressing the space bar a few times and watching a sentence build up. It can get really funny!
"I haven't received any training or courses in navigation for a week."
Apparently this one is about navigation courses. Just very out of context!
"Meanwhile, I don't have anyone who has a little moment. Then it's just a matter of starting with the fact that I don't have anyone who has a little moment."
Someone is feeling lonely and short on time? Haha.
"The weekend has been once a week. I haven't gotten any reply. I haven't written any presentation. Tomorrow I will do. I haven't thought about the fact that I don't have anyone who has a little moment."
Sounds like someone is behind on their planning and feeling bad. Ugh.
"It's a little cold. But it's not that much."
Agreed; the amount of cold is small.
"Sounds like a good day. I haven't gotten any reply. I haven't gotten any reply. I haven't gotten any reply. I haven't gotten any reply."
This one could have gone on forever. Echo?
"Beer. I am a beautiful person."
This is exactly what I think when I drink beer. It could become a slogan for a new beer brand. Or some other product. Makeup? "L'Oreal – I am a beautiful person."
"It is a good idea to have a great day."
Could be a good chorus for a bad song. It actually reminds me a bit of Good Time.
"The only thing that I have a good idea to have a great day."
A hard sentence to interpret. The only thing you have is an idea of how to have a good day? Either way, I think it's positive.
"I am a better way to get the best way to get the best way to get the best way to get the best way."
I am a better way to find a better way? Lol.
"There is enough alcohol on campus. A kick in the groin to give away."
I agree with the first part, but why this violence against the genitals?
"Questions, and I like it. Puddle, and I like it."
There is no doubt that this person likes questions and puddles of water.
That was the last of the funny sentences. What do you get when you tap out random sentences like this?
Server-side pattern rendering
I'm getting closer to the core of friendship-bracelets.net… pattern rendering. It is the most vital part; it's what almost all users come to the site for, and I'm going to make it as good as I can.
One advantage of doing this in Node.js and JavaScript is that if there is ever a need for client-side rendering, the same code can be used. An example would be changing the colors of a pattern interactively, to check whether a set of colors looks good together or not.
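The trick is to keep the rendering logic in a plain module that both Node and the browser can load. A rough sketch of the idea (not the site's actual rendering code; the pattern format is made up):

(function(exports){

    // Turn pattern data plus a color palette into abstract drawing commands,
    // independent of whether they end up in a server-side image or on a client-side canvas
    exports.renderPattern = function(pattern, colors){
        return pattern.knots.map(function(knot){
            return { x: knot.x, y: knot.y, color: colors[knot.colorIndex] };
        });
    };

})(typeof module !== 'undefined' ? module.exports : (window.patternRenderer = {}));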
Many things are falling into place now. I want to launch a beta testing site soon but before that I need generators and server-side rendering for all types of patterns… And probably a lot of other stuff that has nothing to do with patterns at all. I hope you can wait.