
Eskimo Next – Cleaning and Optimizing

Thanks to the ecx-benchmarks, I’ve finally been prodded into optimizing Eskimo. The API has been stable and working nicely, and I’m comfortable with the overall structure.

So far, I have improved Eskimo’s performance nearly 100x. The ecx-benchmarks had made a special case for Eskimo because of its slow speed: it could reach roughly 1.8k operations / second. Eskimo can now run 180k operations / second, and with some available optimizations can reach 200k operations / second. For reference, 1 operation processes 1000 entities and performs some simple additions.

I’m really happy with these results, but the new API is a bit confusing, and doesn’t guarantee that the user will use the fastest methods for accessing components.

Good-bye, Entity. I had always preferred this approach in principle, but at the time I was too used to having an Entity wrapper class. With the optimized code in place, I have a clear vision: remove the Entity wrapper class and use plain integers in entity processing. This eliminates the component-access helpers on the Entity class, which were terribly slow. Instead, the only remaining ways to access components will guarantee optimized access, and iterating over entities will be faster as well. After this refactor, performance should be guaranteed to be the fastest possible, at all times.
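To make the idea concrete, here is a minimal sketch in Python (illustrative only — `World` and `ComponentArray` are hypothetical names, not Eskimo’s actual API): entities are plain integers indexing into per-component storage, so component access is a direct lookup with no wrapper object in between.

```python
# Entities as plain integers: an entity is just an id used to index
# per-component storage, so access is a direct lookup.
class ComponentArray:
    def __init__(self):
        self.data = {}          # entity id -> component value

    def get(self, entity):
        return self.data.get(entity)

    def set(self, entity, value):
        self.data[entity] = value

class World:
    def __init__(self):
        self.next_id = 0
        self.arrays = {}        # component name -> ComponentArray

    def create_entity(self):
        self.next_id += 1
        return self.next_id     # the entity IS this integer

    def components(self, name):
        return self.arrays.setdefault(name, ComponentArray())

# Usage: a simple "add velocity to position" pass over raw ids.
world = World()
positions = world.components("position")
velocities = world.components("velocity")
for _ in range(3):
    e = world.create_entity()
    positions.set(e, 0.0)
    velocities.set(e, 1.5)

for e, v in velocities.data.items():
    positions.set(e, positions.get(e) + v)
```

Because the system only ever hands out integers, there is no slow helper path left for the user to stumble into.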

I had a bit of trouble accepting this decision, since Eskimo was meant to be simple and usable. But the Entity wrapper helpers aren’t worth it, and in fact, encourage an “improper” way of thinking about entities as well. I think eliminating the wrapper class is the right decision in the long run.

You can check out the new branch at: Eskimo Next

Procedural Generation, you so awesome.

[UPDATE 27.01.15] I’ve recently put up a repository containing code for the following algorithms, as well as working on some other generation algorithms.

So in the last few days/week I’ve been having fun with procedural generation. Specifically, two concepts: L-Systems and DLAs (Diffusion-Limited Aggregation).

L-Systems are very simple rewriting rules imposed on a string, or any sequence of symbols. In my simple string-based version, rules look like:

(X → F-[[X]+X]+F[+FX]-X), (F → FF), which means replace X with F-[[X]+X]+F[+FX]-X, and replace F with FF.

This is the basis of L-Systems. You start with X and iterate a few times (in this case, 6 times); the resulting string quickly builds up into something really long. If you then interpret the symbols like this:

F = draw forward

-/+ = turn left/right 25°

[/] = push/pop current state

You will get:

[Image: a plant generated using an L-System.]
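The rewriting itself is tiny. Here is a sketch in Python using the exact rules above (the drawing step, interpreting F/+/-/[/] with a turtle, is separate):

```python
# Expand an L-System: apply every rule to every symbol simultaneously.
rules = {
    "X": "F-[[X]+X]+F[+FX]-X",
    "F": "FF",
}

def expand(axiom, iterations):
    s = axiom
    for _ in range(iterations):
        # Symbols without a rule (-, +, [, ]) are copied unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

plant = expand("X", 6)   # the string the plant above is drawn from
```

Six iterations already produce a string thousands of characters long, which is where all the visual complexity comes from.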

Next up is DLAs. A DLA is a fancy way of saying “simulate x amount of particles doing Brownian (random) motion, and have them stick to other particles that have already stuck to something else”. I’m not sure if that made any sense, so here’s the lowdown in more detail. You have an origin and a target boundary. In the classic case (the first picture), the origin is a circular boundary, the target is just a point in the center, and all the particles move randomly. Once a particle touches the target, it freezes, and other particles can then stick to frozen particles, so they slowly build up around the target. But you can of course define whatever kind of boundary you want for both origin and target, which makes the results more interesting (the 2nd picture is a DLA with a rectangular origin boundary, and the 3rd has a small circular origin and a rectangular target boundary).
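As a hedged sketch of the classic case (plain Python on a grid, rather than the geometric approach my library uses): spawn a walker on the origin circle, random-walk it, and freeze it the moment it lands next to a frozen cell.

```python
import math
import random

def dla(n_particles, radius=20, seed=0):
    """Grid-based DLA: the target is one frozen cell at the origin,
    walkers spawn on a circle of the given radius."""
    rng = random.Random(seed)
    frozen = {(0, 0)}                     # the target: one stuck cell
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        a = rng.uniform(0.0, 2 * math.pi)  # spawn on the origin circle
        x = round(radius * math.cos(a))
        y = round(radius * math.sin(a))
        while True:
            dx, dy = rng.choice(steps)     # Brownian step
            x, y = x + dx, y + dy
            if abs(x) > 2 * radius or abs(y) > 2 * radius:
                break                      # wandered too far; discard
            if any((x + sx, y + sy) in frozen for sx, sy in steps):
                frozen.add((x, y))         # stick to the cluster
                break
    return frozen

cluster = dla(50)
```

By construction, every frozen cell (except the seed) touches another frozen cell, which is exactly what gives DLA its branching, coral-like look.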

I’m thinking of using DLAs to generate maps by post-processing the points a bit more, for example making joints more rigid by rotating them closer to 90°, and connecting nearby points afterwards. The library I’ve written does everything geometrically instead of using pixels and colour values like most demos I found, so it’s more flexible in what it can do and represent. I definitely suggest checking these things out; I just love the idea of making something really simple and having it build you something beautiful and complex.


DynaLight – Dynamic Lighting engine


So, if you were interested in the previous post, I put up the library on GitHub!

You can check out the source there (it’s written in Haxe, but AS3 users should be able to understand most of it anyway), and I also have a .swc up for download if you plan to use it straight from AS3!
Haxe users will need to install polygonal-ds ( haxelib install polygonal-ds ) to use the source!

Please check it out, and let me know if anything goes wrong, or if you have any questions!

Live Demo


Dynamic 2D Lighting

[UPDATE 27.01.15] Interestingly enough, while cruising these old posts, I found a link to this post which also might have some useful explanations on the concept.

So I finally got down and dirty last night, writing a new dynamic lighting library from scratch, and really trying to do it right. Well, today I have the results! :) I will still be optimizing the lighting engine further by using memory access, which should speed up line drawing considerably, but all in all, it’s running pretty fast!

You can see a demo here: (There aren’t any actual objects, only shadows)

Dynamic Lighting Demo (Click the picture for a live demo)

It has 3 lights and 100 squares placed randomly on the stage (2 of the lights are moving). The 100 squares are technically polygons, so you can construct any polygon you want from a series of points, convex or concave; it doesn’t even need to be closed, it can be just a series of lines.

The way I made this is actually really simple:

First, I store a canvas for each light; all the shadows for that light are drawn onto this canvas, and it is used as an alpha mask for the copyPixels() of the light’s texture onto the master canvas. When drawing an object for a light, I first tell the object to draw itself onto the light’s canvas and to return all of its edge points:

For each point in the polygon, I get the vector from the light to the current point (vc), a vector from the light to the next point (vn), and a vector from the light to the previous point (vp). To determine if vc is an edge point, I need to check whether vn and vp are on the same side of vc, or on opposite sides. I can do this with a 2D cross product between vc and vn, and between vc and vp. If vp and vn are on opposite sides, their cross products with vc will have opposite signs; if they’re on the same side, the signs will match. So, for a tiny bit of math:

var vc = { x: current.x - light.x, y: current.y - light.y };
var vn = { x: next.x - light.x, y: next.y - light.y };
var vp = { x: previous.x - light.x, y: previous.y - light.y };

var crossN:Number = vc.x * vn.y - vc.y * vn.x; // 2D cross product of vc and vn
var crossP:Number = vc.x * vp.y - vc.y * vp.x;

if (crossP * crossN >= 0) // same sign: the current point is an edge point

We can further use the sign of crossN to see whether the segment from the current point to the next is facing the light source or not. If the polygon’s vertices are specified in clockwise order, crossN will be negative if the segment faces the light source, and positive if it does not. In the demo above, I only draw the segments facing away from the light source, so that most of the object is exposed to the light, but this can easily be changed.

After the edge points are found, I draw a line from each edge point to the edge of the light’s canvas, essentially a “shadow” of the point from the light source. These lines, combined with the lines of the object, form a shadow polygon on the light’s canvas. From there, we can floodFill() from the center of the canvas to fill the lit area, and then copyPixels() the appropriate part of the light’s canvas to the main canvas (which would generally be the size of your screen).
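For the curious, the whole silhouette test condenses into a few lines. Here is a runnable sketch in Python (an illustration of the same cross-product test, not the engine’s actual code):

```python
def cross(ax, ay, bx, by):
    """2D cross product (the z-component of the 3D cross product)."""
    return ax * by - ay * bx

def edge_points(polygon, light):
    """Return the silhouette (edge) points of a polygon as seen from
    a light source, using the adjacent-vector cross-product test."""
    lx, ly = light
    n = len(polygon)
    edges = []
    for i, (cx, cy) in enumerate(polygon):
        nx, ny = polygon[(i + 1) % n]          # next vertex
        px, py = polygon[(i - 1) % n]          # previous vertex
        vc = (cx - lx, cy - ly)
        vn = (nx - lx, ny - ly)
        vp = (px - lx, py - ly)
        cross_n = cross(*vc, *vn)
        cross_p = cross(*vc, *vp)
        if cross_p * cross_n >= 0:             # same side: edge point
            edges.append((cx, cy))
    return edges

# A unit square with the light off to the left: from the light's
# viewpoint, only the two left-hand corners are silhouette points.
square = [(1, 1), (2, 1), (2, 2), (1, 2)]
light = (-5, 1.5)
```

Running `edge_points(square, light)` picks out exactly the two corners whose shadow rays bound the square, which are the points the shadow lines are extruded from.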

Some improvements to be made: supporting circles as shadow casters, and optimizing floodFill(), since that is the current bottleneck; I’m not sure if that will be possible though.

I realize that I suck at explaining stuff like this, so if there are any questions, ask away! And I’ll try sticking this lighting engine in the Game Jam game this weekend, hopefully it will make stuff look a bit better. :)


Engine guts.

Since I’ve already been working on my engine for a few months of on-and-off, lazy work, I’ll just give a small update on what’s been going on inside so far.

My first task was, of course, collision detection/response. The engine is based on circles and lines, with no support for more complicated shapes. My goal for a future version is better collision response and more dynamic detection (I’ve only got the bare minimum to keep things from going through walls).
One thing I’m happy about is that the calculations are exact: tunneling is impossible with any shape, because collision detection trims a shape’s movement vector until it no longer collides with anything (a simple resolution phase then pushes overlapping shapes out of each other automagically).
There is a problem with squeezing: if you try to move a circle between 2 lines that eventually become too narrow for it to pass through, the circle will pop over one of the edges (mainly due to the resolution phase).
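The trimming idea can be sketched like this (illustrative Python only — a circle against a single vertical wall, where the real engine handles circles and arbitrary line segments):

```python
def trim_movement(cx, cy, r, vx, vy, wall_x):
    """Trim a circle's movement vector so it stops at a vertical wall
    at x = wall_x instead of tunnelling through it. Returns the largest
    fraction t in [0, 1] of (vx, vy) that avoids penetration."""
    if vx <= 0:
        return 1.0                  # moving away or parallel: full move
    gap = (wall_x - r) - cx         # free distance before touching
    if gap >= vx:
        return 1.0                  # can't reach the wall this frame
    return max(0.0, gap / vx)       # stop exactly at the contact point

# A fast circle that would tunnel straight through the wall in one frame:
t = trim_movement(cx=0.0, cy=0.0, r=1.0, vx=10.0, vy=0.0, wall_x=5.0)
new_x = 0.0 + 10.0 * t              # lands exactly touching the wall
```

Because the movement is scaled rather than stepped, there is no speed at which the circle can skip over the wall, which is what makes tunneling impossible.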

After I was happy with my collision engine, I started doing things in the wrong order and set up my lighting engine. I had an old prototype of this sitting on my desktop, so getting it up and running was a matter of extending old code. New in this version are the gradients (pre-rendered bitmaps) and circle shadow support. I plan to give the zombie game a heavy horror influence, so lighting will play a considerable role in the gameplay (zombie AI will respond to the player’s flashlight).

In parallel with lighting, I began working on the zombie AI, which is about half done, depending on how many more fps I can squeeze out. Future plans include having zombies react to light, and to each other (if one sees you, other zombies who see that zombie will follow it). The zombie AI is quite simple. It depends on a node network, which isn’t my vision of an ideal AI, but it will do for this game. Wandering is simple: the zombie finds its nearest node. If that node is far away, it first travels to that node; otherwise it proceeds directly to the next step, which is picking a random connecting node (this is one thing I plan to change, or at least constrain in some way, just so the movement doesn’t seem so random).
As soon as the player is in sight, the zombie follows him directly. Upon losing sight, the zombie subtracts the player’s movement vector from the last position where he was seen, and goes there. Then it finds the closest node to that point. If that node is nearby, the zombie proceeds to the next step; otherwise it adds the node to its target list. Next, it compares all of that node’s connecting nodes with the player’s last seen movement vector, so that it follows the player somewhat. The step I haven’t implemented yet is for the zombie to also add the node nearest to its last node, since when running away, players will usually take many short paths to lose sight of the zombie as quickly as possible.
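The wandering step above can be sketched over a simple node graph (hedged Python, with an assumed graph structure — this is not the game’s actual code):

```python
import math
import random

def nearest_node(pos, nodes):
    """Closest node on the network to a world position."""
    return min(nodes, key=lambda n: math.dist(pos, n))

def wander_target(pos, nodes, edges, rng, near_dist=2.0):
    """Pick a zombie's next wander target: walk to the nearest node
    first if it's far away, otherwise hop to a random connected node."""
    node = nearest_node(pos, nodes)
    if math.dist(pos, node) > near_dist:
        return node                    # get onto the network first
    return rng.choice(edges[node])     # then pick a random neighbour

# A tiny three-node network:
nodes = [(0, 0), (5, 0), (5, 5)]
edges = {(0, 0): [(5, 0)], (5, 0): [(0, 0), (5, 5)], (5, 5): [(5, 0)]}
rng = random.Random(1)
target_far = wander_target((20, 20), nodes, edges, rng)   # far: nearest node
target_near = wander_target((5, 0.5), nodes, edges, rng)  # near: a neighbour
```

Constraining that `rng.choice` (for example, biasing against the node just visited) is exactly the kind of tweak that would make the wandering look less random.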

So far, that’s what’s going on. For shooting, I’m just raytracing through the obstacles and zombies, and adding another light source over the gun.
I can’t promise frequent updates, but for serious feature additions I’ll most probably write up a blog post explaining them.

Enjoy the alpha engine!

Blog is up!


This blog, and this website in general, is just a place where I will post stuff I’m doing and, most of all, talk about and show game design and AI concepts. I’m currently working on a Zombie Shooter, which will hopefully be up soon enough; just gotta get it to the Alpha stage…