Eskimo Next – Cleaning and Optimizing

Thanks to the ecx-benchmarks, I've finally been prodded into optimizing Eskimo. The API has been stable and working nicely, and I'm comfortable with the overall structure.

So far, I have improved Eskimo's performance nearly 100x. The ecx-benchmarks had made a special case for Eskimo because of its slow speed: it could reach roughly 1.8k operations / second. Eskimo can now run at 180k operations / second, and with some optional optimizations can reach 200k operations / second. For context, 1 operation processes 1000 entities, performing some simple additions.

I'm really happy with these results, but the new API is a bit confusing, and it doesn't guarantee that the user will use the fastest methods for accessing components.

Good-bye, Entity. I had always preferred this approach in principle, but at the time I was too used to having an Entity wrapper class. With the optimized code in place, I now have a clear path: remove the Entity wrapper class and use plain integers for entity processing. This eliminates the component access helpers on the Entity class, which were terribly slow. Instead, the only ways to access components will be the optimized ones, and iterating over entities will be faster as well. After this refactor, performance should be as fast as possible, at all times.
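As a rough illustration of the direction (a Python sketch with made-up names, not Eskimo's actual Haxe API), integer entities are just indices into per-component arrays:

```python
# Hypothetical sketch: an entity is a plain int, and each component type
# lives in its own flat array indexed by entity id. Illustrative only.

class ComponentArray:
    def __init__(self):
        self.data = []

    def set(self, entity, component):
        # grow the backing list so the entity id is a valid index
        while len(self.data) <= entity:
            self.data.append(None)
        self.data[entity] = component

    def get(self, entity):
        return self.data[entity]

class Context:
    def __init__(self):
        self.next_id = 0
        self.arrays = {}  # component type -> ComponentArray

    def create(self):
        # no wrapper object is allocated; the entity *is* the integer
        entity = self.next_id
        self.next_id += 1
        return entity

    def array(self, ctype):
        return self.arrays.setdefault(ctype, ComponentArray())

class Position:
    def __init__(self, x, y):
        self.x, self.y = x, y

ctx = Context()
e = ctx.create()
positions = ctx.array(Position)
positions.set(e, Position(1.0, 2.0))
print(positions.get(e).x)  # direct indexed access, no per-entity helpers
```

The point of the shape above: component access always goes through the fast array path, because there is no slower helper path left to take.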

I had a bit of trouble accepting this decision, since Eskimo was meant to be simple and usable. But the Entity wrapper helpers aren’t worth it, and in fact, encourage an “improper” way of thinking about entities as well. I think eliminating the wrapper class is the right decision in the long run.

You can check out the new branch at: Eskimo Next

Eskimo ECS + Tundra Engine

A hard truth I’ve come to realize is I apparently like making tools more than games. I think I like making games, but my track record clearly shows otherwise.

On that note, while developing, I quickly ran into some usability and consistency issues with hxE2; the whole thing felt bloated and messy. Fixing bugs would spawn other bugs, functionality was split awkwardly and sometimes duplicated, and so on. So I did what I like to do and created yet another ECS, this time explicitly focused on simplicity and lighter code.

Eskimo (https://github.com/PDeveloper/eskimo) is the best approach to an ECS I've made so far. All component access happens through an Entity object (unlike in hxE2, where you could do that, but also use a View object), and Views only manage a list of entities (they don't store their own components, except for one View type, which is explicitly defined that way). Further, there are types of Views that enable specific features you might need, such as just a list of entities with certain components, or tracking of entities added to, updated in, or removed from that list. There has been no effort to unreasonably optimize parts of the system, and by design Views are meant to be single-threaded, resulting in a clean and simple implementation. Multi-threading support happens through the base Context class, again in a much clearer manner (though it is only at a proof-of-concept stage right now). Check out the GitHub repository for more details and usage!
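To illustrate the idea of entity-list Views and the tracking variant (a Python sketch with hypothetical names, not Eskimo's real API):

```python
# Sketch of a View that keeps a list of entities matching a set of component
# types, plus a variant that records added/removed entities. Names invented.

class View:
    def __init__(self, required):
        self.required = set(required)
        self.entities = []

    def matches(self, components):
        return self.required <= set(components)

    def on_entity_changed(self, entity, components):
        # called whenever an entity's component set changes
        present = entity in self.entities
        ok = self.matches(components)
        if ok and not present:
            self.entities.append(entity)
            self.added(entity)
        elif present and not ok:
            self.entities.remove(entity)
            self.removed(entity)

    def added(self, entity): pass
    def removed(self, entity): pass

class TrackingView(View):
    def __init__(self, required):
        super().__init__(required)
        self.added_list = []
        self.removed_list = []

    def added(self, entity): self.added_list.append(entity)
    def removed(self, entity): self.removed_list.append(entity)

    def clear(self):
        # typically called once per frame, after systems have consumed events
        self.added_list.clear()
        self.removed_list.clear()

view = TrackingView(["Position", "Velocity"])
view.on_entity_changed(1, ["Position", "Velocity"])  # enters the view
view.on_entity_changed(1, ["Position"])              # leaves it again
print(view.added_list, view.removed_list)
```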

Now, with Eskimo in hand, I am much more confident that things will work correctly; so far it has had fewer bugs and I haven't run into any big limitations.

Afterwards I devoted time to learning about CPPIA, the relatively new scripting feature in hxcpp. I should write a post on what I learned along the way, but I'll wait until my understanding solidifies.

I'm currently working on a tool called Tundra, which is both a tool and a (game) engine right now, though I'll try to extract the coolest functionality into a separate library: live code reloading. It's currently based on Snow from Snowkit, but that could be swapped for anything that supports file watching. It scans your project.flow file, finds all source directories, and looks for classes implementing either the IState or ISystem interface. Once found, it compiles each one into a CPPIA file and loads it. Only one IState class is loaded at a time (the most recently changed one), and reloading an IState resets the Eskimo ECS, clearing all entities. ISystem classes are loaded and run without any interruption to gameplay, and via a meta tag can be assigned to one of four system groups: input, logic, physics, or render. The groups run in that order, but the order of individual systems within a group is undefined.
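The group ordering described above can be sketched like this (Python with invented names; the decorator stands in for Tundra's actual meta-tag mechanism, which is Haxe):

```python
# Sketch of four fixed system groups run in a defined order, with undefined
# ordering inside each group. All names here are illustrative.

GROUP_ORDER = ["input", "logic", "physics", "render"]

systems = {group: [] for group in GROUP_ORDER}

def system(group):
    # stand-in for a meta tag: registers the function in its group
    def register(fn):
        systems[group].append(fn)
        return fn
    return register

@system("render")
def draw(dt): calls.append("draw")

@system("input")
def poll(dt): calls.append("poll")

@system("logic")
def step(dt): calls.append("step")

calls = []

def run_frame(dt):
    # groups run in a fixed order; order *within* a group is not guaranteed
    for group in GROUP_ORDER:
        for fn in systems[group]:
            fn(dt)

run_frame(1 / 60)
print(calls)  # input systems before logic, logic before render
```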

Despite some problems with CPPIA that cause a crash once in a while, I can leave the game window open and freely work on a game idea with no interruption and nearly immediate results. Adding a new system is as easy as creating the class; Tundra will pick it up and load it.

My work isn’t public because I have yet to solve a few issues (I kind of broke it a bit before initializing git, what a great idea…), but once I’m comfortable with how it works, I’m excited to release it into the wild. It’s improved the speed at which I can prototype, and so hopefully for once, after many years, I’ll be able to make another game…maybe.

hxE2 + Entity Systems

[Update 2016.01.26] hxE2 is now deprecated in favour of Eskimo. If someone wants to try and fix issues, sure why not, but I’m not actively using or maintaining this!

[Update] hxE2 has been released in Alpha status here. It corresponds to the topics and improvements to hxE mentioned in this post. All feedback and comments are welcome! :)

I like talking about Entity Systems. And unlike most things I’ve talked about on this website, I have actually finished several implementations of different ECS designs.

My first one, hxE, was pretty much a port of the Artemis ECS, which I've mentioned before. But this design was very limited: each System could only process one type of entity, which could overcomplicate simple tasks like checking the distance between "Item" entities and "Inventory" entities. Ideally, I would loop over all <Item, Transform> entities and check the distance to each <Inventory, Transform> entity. Simple. The one-entity-type limit meant I needed two systems, one per entity type, plus some sort of communication between them, or, going further, a generalized concept of collisions, which means more components... etc. etc.

[For those that didn’t understand much about what I just wrote, I recommend reading through the T-Machine blog posts on entity systems. He brings up the core concepts of how they work, as well as addressing common questions.]

Summed up, my ideology is a complete separation of data and logic: Systems are logic, and Components are data. In practice, most components are PODs (Plain Old Data) with no functions, except for helper functions that only operate on that component's own data. A System can be anything that does work on components, though I usually formalize the concept as a class in my engines, simply to provide a common interface and reduce the work needed to get it running.

Moving on from hxE's design, my current entity systems detach entity filtering from the systems. A System is now just an empty shell with function hooks to override (onWorldAdded, onWorldRemoved, process(delta:Float), ...). This means a system can take in as many entity types as it needs:

var items = new View2(world, Transform, Item);
var inventories = new View2(world, Transform, Inventory);

for (item in items.entities)
{
    var item_transform = items.get1(item);
    for (inventory in inventories.entities)
    {
        var inventory_transform = inventories.get1(inventory);
        var dx = item_transform.x - inventory_transform.x;
        var dy = item_transform.y - inventory_transform.y;
        if (dx * dx + dy * dy <= distance_to_pickup * distance_to_pickup)
        {
            // add item to inventory and destroy the item entity.
        }
    }
}

//var example = new View4(world, Transform, Mesh, Material, Input);

Not only does this make Systems much more flexible, it also separates the concept of a System from that of a View into the World. At this point, System classes are helper classes rather than essential parts of the engine: a system can be anything the user wants it to be, and interaction with the world is much more direct through Views.

Internally, things are set up to work well in a multithreaded environment. Each View holds a list of updates to process; once that point is made thread-safe, the rest of the interaction with the EntityWorld always happens through the View. This update pushing is also a slight performance concern, but I will work on performant, single-threaded implementations later if it impacts projects more than I expect.
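The buffered-update idea can be sketched as follows (a Python sketch with hypothetical names; the real implementation is Haxe):

```python
# Sketch: worker threads push entity changes onto a view's thread-safe
# queue, and the view drains them on its own thread. Illustrative only.

import queue
import threading

class BufferedView:
    def __init__(self):
        self.updates = queue.Queue()  # the single thread-safe touch point
        self.entities = set()

    def push(self, op, entity):
        # safe to call from any thread
        self.updates.put((op, entity))

    def apply(self):
        # drain pending updates; only ever called from the view's own thread
        while not self.updates.empty():
            op, entity = self.updates.get()
            if op == "add":
                self.entities.add(entity)
            else:
                self.entities.discard(entity)

view = BufferedView()
workers = [threading.Thread(target=view.push, args=("add", i)) for i in range(8)]
for w in workers: w.start()
for w in workers: w.join()
view.apply()
print(sorted(view.entities))
```

The design trade-off mentioned above is visible here: every change pays for a queue operation, which is the cost of keeping the rest of the View lock-free.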

Over the next few weeks I'll be working on a project based on hxE2 that could be interesting to people using it. As I do, I'll try to go into more detail about specific parts of the hxE2 implementation, focusing on parts I get questions about; for now, this is just a quick introductory post on hxE2 and my way of implementing an ECS!

Enjoi :)

P.

New stuff: NeuroPulse and Umbrak

Boy has it been a while. I have a habit of not having a habit of posting here. Lots has happened though, so I hope this will be a mildly interesting read!

First off, NeuroPulse! My good friend Edib0y (that's a zero) and I have been working on the prototype, and you can see what it looks like:

We have a Tumblr sort of going, but as NeuroPulse progresses, so will the blog. Important though, is the latest post, which is HERE. Read through that to get a better understanding of what NeuroPulse is and isn’t, and our hopes and plans for its future! I’ll post a few theory things and thoughts I have going on about it here, but mainly we’ll try to keep everything NeuroPulse related over there!

A bit of technical meat: NeuroPulse is based on the Ogre3d engine, using CEGUI as its GUI front-end and an entity system as its game-logic backbone. So far everything is coming along pretty nicely, though we're in the middle of a big refactoring, which I should finish soon... but I've been sidetracked...

Which brings me to the mysterious name: Umbrak! Two days ago I got the urge to make a small, little, quick, fast horror game, but as everyone who's ever developed a game knows, that doesn't happen. I started programming, dropped in my entity system, dropped in my screen system, but a lot of the engine meat was still incomplete. So I got started on that, while also focusing as much as possible on making it easy for me to change things, add things, etc., and I ended up with 2 fancy dev tools in the engine.

The first thing I implemented was live asset reloading, which means I can reload all dynamically loaded assets whenever I like. When the engine reloads graphics, it replaces them as soon as they're loaded, so I can swap in new graphics without even restarting the level I'm testing, which should be really handy for artists who need that one thing to look just right in that one spot. This still has a ways to go in terms of optimization, like baking all static assets onto a BitmapData, but so far so good!

The next thing I did was implement scripting! I used hscript, a scripting language for Haxe that looks and works like Haxe, so I'm very free in what I can do in my scripts; the only limits are that I can't declare new classes, plus the other things listed on the hscript Google Code page. What does this mean? For one, I have a little console through which I call the asset-reloading functions, world reloading, screen reloading, and so on. I basically have everything accessible to me, and I plan to implement simple level-editing capabilities too.

Then, I also use scripting for world initialization and object creation. The world-initialization script declares object types and the scripts to use when such an object is found, and object scripts simply set up the entity with all its components. It has proven pretty useful and dynamic: I can easily change the radius or brightness of a light, reload the level, and bam, see the changes applied! Another thing I plan to do is let specific object scripts be reloaded, so I don't even need to reload a level to see changes applied to updated objects. Snazzy...

Oh, and why Umbrak? Well, I was thinking horror game, so naturally Penumbra came to mind. That went to Umbra, which sounded too simple, so I added a "k". Not sure if it'll actually turn into something, but time will tell! Screenshots would be a bit pointless, since it's basically my lighting engine patched together with NAPE physics, but if things get interesting, I'll try and show something!

As always, if you happen to be reading this, and have a question, comment away! :)

P.

NeuroPulse! Working on the alpha.

So it’s been a long time.

I heard about Microsoft's Imagine Cup (www.imaginecup.com), and a friend and I thought, "why not?". So we picked up C++ and got down to it, and so far it's been a really fun ride!

First, the game idea: NeuroPulse. It's an idea I had ages ago, but it has changed a lot since we started thinking it through. You can think of it as a neural-network simulator with an RTS built on top, or something like that. The basic idea is that there is an environment, and you get things done by working with the environment, and of course pushing it to do more of what you want.

Nodes are the main objects in the game: energy flows into them, and they emit pulses once their internal energy crosses a certain threshold. Certain nodes have a reactor, so they constantly generate a small amount of energy, and from this comes a hopefully emergent behaviour of the whole system. You control nodes by building a hub on them; around the hub you can build buildings that do things and modify incoming or outgoing pulses. To build on another node, you send pulses with building instructions attached to build a hub, after which you can easily build more buildings. To capture an enemy node, you first need to overload it by sending lots and lots of energy. Each overload damages all the buildings on the node until they're destroyed, at which point you can send your build instructions to it.
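As a toy illustration of the threshold-pulse mechanic described above (not actual NeuroPulse code; all names and numbers are invented):

```python
# Toy model: nodes accumulate energy, reactors generate a trickle per tick,
# and a node fires a pulse to its neighbours once it crosses a threshold.

THRESHOLD = 10

class Node:
    def __init__(self, reactor=0):
        self.energy = 0
        self.reactor = reactor    # per-tick generation, 0 for plain nodes
        self.neighbours = []
        self.pulses_fired = 0

    def tick(self):
        self.energy += self.reactor
        if self.energy >= THRESHOLD:
            # fire: split the stored energy among neighbours
            share = self.energy // max(len(self.neighbours), 1)
            for n in self.neighbours:
                n.energy += share
            self.energy = 0
            self.pulses_fired += 1

reactor_node = Node(reactor=5)
plain = Node()
reactor_node.neighbours.append(plain)

for _ in range(4):
    reactor_node.tick()
    plain.tick()

print(reactor_node.pulses_fired, plain.pulses_fired)
```

Even this tiny version shows the intended emergent flavour: the plain node fires only because energy arrives from the reactor node upstream.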

We're using the Ogre3d engine for the display, and it has been a suuuper awesome ride! It's really easy to set things up and get going, so we're really happy with that. Now it's all about tying a GUI and some interaction into the whole system.

Next time I’ll try upping some screenshots!

P.

New game project in D + OpenGL + SDL2

Oi!

So, like I said a few weeks back, I've started to wrangle with D and OpenGL and SDL2, and it's getting better and better all the time. Very small steps in the Haxe/Flash world are super gratifying in the C++/D world :D I've currently made a small, super simple 2D rendering engine I'll use for prototyping, and ported a few of my tools from Haxe to D. Plus, I ported a variation of Artemis (http://gamadu.com/artemis/), an Entity System framework. So far I'm loving Entity Systems, but I still need to figure out a few things that would have been easy to do before (nothing too serious, just cleanly identifying which entities are enemies, who the player is, etc.).

The rendering engine currently uses OpenGL immediate mode and the fixed-function pipeline... both modern "no-no"s, but all that matters for now is that I see something on the screen and can start designing the actual game!

I seriously recommend that anyone interested in doing low-level stuff, who has done Haxe or Flash dev before, check out D. It will feel (and be) low-level, but still give you enough to work with to feel comfortable. Much more so than C++, imo.

The new game project will remain “secret” for now…the main reason being that I still haven’t figured out all the details of it, and I want to post some sort of screenies or demos first to see how the gameplay could work! :) So far I’ve been calling it “Planet Ranger”.

It might be something between Minecraft/Starforge/Osmos/Civilization/SimCity? :D Not very specific I guess…

P.

Procedural Generation, you so awesome.

[UPDATE 27.01.15] I’ve recently put up a repository containing code for the following algorithms, as well as working on some other generation algorithms.

So in the last few days/week I've been having fun with procedural generation. Specifically, two concepts: L-Systems and DLA (Diffusion-Limited Aggregation).

L-Systems are very simple rules imposed on a string, or a sequence of symbols. In my simple string-based version, rules look like:

(X → F-[[X]+X]+F[+FX]-X), (F → FF), which means replace X with F-[[X]+X]+F[+FX]-X, and replace F with FF.

This is the basis of L-Systems. You start with X and iterate a few times (in this case, 6 times); the resulting string quickly builds up to something really long. If you then interpret the rules like this:

F = draw forward

-/+ = turn left/right 25°

[/] = push/pop current state

You will get:

L System Plant
A plant generated using an L-System.
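The rewriting step itself is tiny; here is a sketch of the expansion (Python rather than the Haxe used elsewhere on this blog; the turtle-drawing interpretation is omitted):

```python
# The rewrite rules from above, run literally as parallel string replacement.

RULES = {"X": "F-[[X]+X]+F[+FX]-X", "F": "FF"}

def expand(axiom, iterations):
    s = axiom
    for _ in range(iterations):
        # replace every symbol that has a rule; +, -, [, ] pass through
        s = "".join(RULES.get(c, c) for c in s)
    return s

print(len(expand("X", 6)))  # six iterations, as used for the plant above
```

Note the replacement is applied to all symbols simultaneously per iteration, which is what makes the string explode in length so quickly.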

Next up: DLAs. A DLA is a fancy way of saying "simulate x particles doing Brownian (random) motion, and have them stick to other particles that have stuck to something else". I'm not sure if that made any sense, so here's the lowdown in more detail. You have an origin and a target boundary. In the classic case (the first picture), the origin is a circular boundary and the target is just a point in the center, and all the particles move randomly. Once a particle touches the target, it freezes, and other particles can stick to frozen particles, so they slowly build up around the target. But you can of course define whatever kind of boundary you want for both origin and target, which can make the results more interesting (the 2nd picture is a DLA with a rectangular origin boundary, and the 3rd has a small circular origin and a rectangular target boundary).
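A minimal grid-based version of the classic setup (a Python sketch; the library this post describes works geometrically rather than on a grid, and all parameters here are invented):

```python
# Grid DLA: a frozen seed at the origin, particles released on a circle,
# random-walking until they touch something frozen. Illustrative only.

import math
import random

def dla(particles=40, spawn_radius=12, kill_radius=18, seed=1):
    random.seed(seed)
    stuck = {(0, 0)}  # the target: a single frozen point at the origin

    def spawn():
        a = random.uniform(0, 2 * math.pi)
        return round(spawn_radius * math.cos(a)), round(spawn_radius * math.sin(a))

    for _ in range(particles):
        x, y = spawn()
        while True:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if x * x + y * y > kill_radius * kill_radius:
                x, y = spawn()  # wandered too far: release a fresh particle
                continue
            # freeze on contact with any already-frozen neighbour
            if any((x + ox, y + oy) in stuck
                   for ox, oy in [(1, 0), (-1, 0), (0, 1), (0, -1)]):
                stuck.add((x, y))
                break
    return stuck

cluster = dla()
print(len(cluster))  # number of frozen cells in the aggregate
```

Swapping the `spawn` function for a rectangular boundary, or seeding `stuck` with a rectangle instead of a point, gives the variations described above.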

I'm thinking of using DLAs to generate maps by post-processing the points a bit more, for example making joints more rigid by rotating them toward 90°, and connecting nearby points afterwards. The library I've written does everything geometrically, instead of using pixels and colour values like most demos I found, so it's more flexible in what it can do and represent. I definitely suggest checking these things out; I just love the idea of something really simple making you something beautiful and complex.

P.

D + OpenGL = Fun

Thought I’d inform everyone on what I’m currently doing. Between working on a few games, and university, I’m starting to get into D programming as well as OpenGL and SDL.

You probably won't have heard of D, but you can check it out here: www.dlang.org. The homepage lists a lot of its features!

It's like C++, but I like to think of it as a more modern version, with several snazzy features built in, so it's also convenient. The whole style of it feels way more comfortable to me than when I tried getting started with C++, and... well... I can't really explain why I like it, and perhaps that's a bad sign, but I like learning new things, so why not!

I figured that, coming from an AS3 background, D is closer to what I'm used to than C++, seeing that it has garbage collection and, from what I've read, a few handy functions built in that C++ doesn't have (yes, you can use libraries either way, but why the hassle if the feature's in the language itself?). Even modules are imported with dots, which is obviously really minor, but feels more like home:

import derelict.sdl.sdl2;

And of course, learning something new is hard if you don’t have motivation, so my motivation to learn D is to also learn OpenGL and SDL2 with it (and also vice versa), and make a simple game engine I could use to make some “real” games from scratch!

I just needed to quickly wrap my head around the general workings of OpenGL, and I'm already starting to understand its pipeline and how it works (I think...). I haven't really touched SDL too much, except to open up a GL window and such. Either way, it should get interesting. (I think any programmer (but nobody else :( ) will understand the rush of excitement I got when I saw a triangle with red, green, and blue corners rotating in a black window! To know that I commanded my GPU to do such things... the power I now wield!)

Seriously, check out the D programming language though! The IRC is pretty active and has a friendly community that will help you if you ever get stuck on something, and all in all it's new, interesting, and without it you'll be useless.

P.

DynaLight – Dynamic Lighting engine

Hey!

So, if you were interested in the previous post, I put up the library on GitHub!

You can check out the source there (it's written in Haxe, but AS3 users should be able to understand most of it anyway), and I also have a .swc up for download if you plan to use straight-up AS3 with it!
Haxe users will need to install polygonal-ds ( haxelib install polygonal-ds ) to use the source!

Please check it out, and let me know if anything goes wrong, or if you have any questions!

Live Demo

P.

Dynamic 2D Lighting

[UPDATE 27.01.15] Interestingly enough, while cruising these old posts, I found a link to this post which also might have some useful explanations on the concept.

So I finally got down and dirty last night, writing a new dynamic lighting library from scratch and really trying to do it right. Well, today I have the results! :) I will still optimize the lighting engine further using memory access, which should speed up line drawing considerably, but all in all it's running pretty fast!

You can see a demo here: (There aren’t any actual objects, only shadows)

Dynamic Lighting Demo (Click the picture for a live demo)

It has 3 lights and 100 squares placed randomly on the stage (2 of the lights are moving). The 100 squares are technically polygons, so you can construct any polygon you want from a series of points, convex or concave; it doesn't even need to be a closed polygon, it can be just a series of lines.

The way I made this is actually really simple:

First, I store a canvas for each light; all the shadows for that light are drawn onto this canvas, and it is used as an alpha mask for the copyPixels() of the light's texture onto the master canvas. When drawing an object for a light, I first tell the object to draw itself onto the light's canvas and to return all of its edge points:

For each point in the polygon, I get the vector from the light to the current point (vc), a vector from the light to the next point (vn), and a vector from the light to the previous point (vp). To determine if the current point is an edge point, I need to check whether vn and vp are on the same side of vc or on opposite sides. I can do this using a 2D cross product between vc and vn, and between vc and vp. If vp and vn are on opposite sides, their cross products with vc will have opposite signs; if they're on the same side, the signs match. So, for a tiny bit of math:

var vc = { x: current.x - light.x, y: current.y - light.y };
var vn = { x: next.x - light.x, y: next.y - light.y };
var vp = { x: previous.x - light.x, y: previous.y - light.y };

var crossN:Number = vc.cross(vn); // 2D cross product: vc.x * vn.y - vc.y * vn.x
var crossP:Number = vc.cross(vp);

if (crossP * crossN >= 0) // the current point is an edge point

We can further use the sign of crossN to see whether the segment we are about to draw, from the current point to the next, is facing the light source or not. If the vertices of the polygon are specified in clockwise order, crossN will be negative if the segment faces the light source and positive if it does not. In the demo above, I only draw the segments facing away from the light source, so that most of the object is exposed to the light, but this can easily be changed.
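Here is the same edge-point test as a self-contained example (Python rather than the post's AS3/Haxe), run on a square with a light to its left:

```python
# Silhouette (edge) vertices of a polygon as seen from a light, using the
# same-sign cross-product test described above.

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def edge_points(polygon, light):
    lx, ly = light
    edges = []
    for i, (px, py) in enumerate(polygon):
        nx, ny = polygon[(i + 1) % len(polygon)]  # next vertex
        qx, qy = polygon[i - 1]                   # previous vertex
        vc = (px - lx, py - ly)
        vn = (nx - lx, ny - ly)
        vp = (qx - lx, qy - ly)
        # same-signed cross products mean vp and vn lie on the same side
        # of vc, so the current vertex is on the silhouette
        if cross(vc, vn) * cross(vc, vp) >= 0:
            edges.append((px, py))
    return edges

square = [(1, 1), (2, 1), (2, 2), (1, 2)]
print(edge_points(square, light=(0, 1.5)))  # → [(1, 1), (1, 2)]
```

For this square, the two silhouette vertices are the ones at the extreme angles as seen from the light, exactly the points the shadow lines are cast from.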

After the edge points are found, I draw a line from each edge point to the edge of the light's canvas, essentially the "shadow" of that point from the light source. These lines, combined with the lines of the object, form a polygon of shadow on the light's canvas. From there, we can floodFill() the center of the canvas to fill the lit area, and then copyPixels() the appropriate part of the light's canvas onto the main canvas (which would generally be the size of your screen).

Some improvements still to be made: supporting circles as shadow casters, and optimizing floodFill(), since that is the current bottleneck; not sure if that will be possible, though.

I realize that I suck at explaining stuff like this, so if there are any questions, ask away! And I’ll try sticking this lighting engine in the Game Jam game this weekend, hopefully it will make stuff look a bit better. :)

P.