Drawing bitmaps to the screen

Hi everyone, I’m making an Android game, and I’m struggling to find the most efficient way to render the scene. It’s going to be a top-down 2D game composed of blocks in a grid. The grid is roughly 100x100 blocks, but the player only ever sees part of it, around 20x10 blocks, as he moves around.

So I need to draw about 200 squares to the screen every frame. I was wondering how I would go about this, considering Java is quite slow. Here are some ways I have come up with:

  1. Allocate memory for all 10,000 blocks (100x100) up front, with vertex info, texture info, etc., and every frame simply update the vertices of the ones I have to display on the screen. This is good because I don’t have to allocate things every frame, but it has a high space complexity.

  2. Every frame, look at the new blocks that I have to display on the screen and allocate new memory for those. So if I move left and 10 new blocks are exposed, allocate for those 10 blocks, and also change the vertices of everything I have to display.

  3. Keep a cache of pre-allocated data for every type of block. So I would have 100 stone blocks, 100 floor blocks, 100 door blocks, etc., and simply change their vertex info dynamically instead of allocating new data every time. So not a huge amount of memory, and no allocation every frame. A rough sketch of what I mean is below.
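
Something like this is what I have in mind for option 3 (just a sketch; the class name, the 100-quads-per-type limit and the vertex layout are placeholders I made up, and the GL buffer upload is left out):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch of option 3: one pre-allocated buffer per block type,
// reused every frame instead of allocating new vertex data.
class BlockQuadCache {
    static final int MAX_QUADS = 100;          // e.g. 100 stone quads, 100 floor quads, ...
    static final int FLOATS_PER_QUAD = 4 * 4;  // 4 vertices * (x, y, u, v)

    final FloatBuffer vertices = ByteBuffer
            .allocateDirect(MAX_QUADS * FLOATS_PER_QUAD * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();

    // Overwrite the quad at 'index' in place; nothing is allocated here.
    void setQuad(int index, float x, float y, float size, float u, float v, float uvSize) {
        vertices.position(index * FLOATS_PER_QUAD);
        vertices.put(x).put(y).put(u).put(v);                                  // bottom-left
        vertices.put(x + size).put(y).put(u + uvSize).put(v);                  // bottom-right
        vertices.put(x + size).put(y + size).put(u + uvSize).put(v + uvSize);  // top-right
        vertices.put(x).put(y + size).put(u).put(v + uvSize);                  // top-left
    }
}
```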

I don’t know if I’m heading in the right direction with these. Is there a better way? Thanks. :)

I assume that most elements in your game are static, so they hardly (or never) change. The general procedure would be this:
  - Assign data to the blocks space-efficiently: if you have 10,000 blocks, you store 10,000 positions (x, y) and 10,000 texture coordinates (x, y), preferably into a sprite sheet. That makes 40,000 floats.
  - When the player moves, nothing is (re-)assigned to anything. Only an offset is changed, which relocates the blocks as they move out of (or into) the canvas (-1,-1 … 1,1). This is usually done with shaders.
  - If all of those blocks are drawn with the same draw call and only one parameter changes per block, e.g. an offset into some array or texture, it makes sense to use instancing for the draw call. This greatly reduces the required time, and you can easily draw a number of elements that greatly exceeds your 10k without noticing any performance drop.
This is the procedure I used in various OpenGL programs, albeit in C++. For OpenGL ES, there should be similar routines for most of the above aspects :)

Of course, dynamic elements can be drawn just like those above, but you have to use dynamic (per-frame) parameters, which is a bit slower.
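
If it helps, here is a minimal sketch of the offset-plus-instancing idea with OpenGL ES 3.0 on Android. The attribute layout, uniform names and the 16x16 sprite sheet are assumptions for illustration only; shader compilation and buffer setup are left out:

```java
import android.opengl.GLES20;
import android.opengl.GLES30;

// Sketch: one unit quad, instanced once per block. Per-instance data
// (block position and sprite-sheet cell) never changes; only the camera
// offset uniform is updated when the player moves.
class InstancedBlockRenderer {
    static final String VERTEX_SHADER = ""
            + "#version 300 es\n"
            + "layout(location = 0) in vec2 aCorner;     // unit quad corner (0..1)\n"
            + "layout(location = 1) in vec2 aBlockPos;   // per-instance world position, in blocks\n"
            + "layout(location = 2) in vec2 aSpriteCell; // per-instance sprite-sheet cell\n"
            + "uniform vec2 uCameraOffset;               // the only thing that changes per frame\n"
            + "uniform vec2 uScale;                      // block size in clip space, set once\n"
            + "out vec2 vTexCoord;\n"
            + "void main() {\n"
            + "    vec2 world = aBlockPos + aCorner;\n"
            + "    gl_Position = vec4((world - uCameraOffset) * uScale - 1.0, 0.0, 1.0);\n"
            + "    vTexCoord = (aSpriteCell + aCorner) / 16.0; // 16x16 sprite sheet assumed\n"
            + "}\n";

    int program;        // compiled and linked elsewhere
    int uCameraOffset;  // = GLES20.glGetUniformLocation(program, "uCameraOffset")
    int blockCount;     // e.g. 10000

    void draw(float camX, float camY) {
        GLES20.glUseProgram(program);
        GLES20.glUniform2f(uCameraOffset, camX, camY);
        // Attributes 1 and 2 were configured once with glVertexAttribDivisor(attr, 1),
        // so they advance per instance, not per vertex.
        GLES30.glDrawArraysInstanced(GLES20.GL_TRIANGLE_STRIP, 0, 4, blockCount);
    }
}
```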

One option is to create a 21x11 grid of blocks (i.e. large enough to fill the screen regardless of the alignment of the block grid relative to the screen) consisting of constant data, and draw it every frame. The only changes are to uniform variables; the vertex shader is responsible for generating the vertex coordinates (based upon the offset modulo the block size) and the texture coordinates (based upon the offset divided by the block size).
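To illustrate, the vertex shader for that fixed grid could look roughly like this (written here as a Java string constant for a GLSL ES 3.0 setup; the uniform names and the idea of looking the block type up in a 100x100 tile-index texture in the fragment shader are my own assumptions, not requirements of the approach):

```java
// Sketch of the vertex shader for the fixed 21x11 grid. The grid's vertex
// data never changes; only uOffset (camera position in block units) does.
static final String GRID_VERTEX_SHADER = ""
        + "#version 300 es\n"
        + "layout(location = 0) in vec2 aGridPos;   // constant: 0..21 x 0..11, in blocks\n"
        + "uniform vec2 uOffset;                    // camera position, in blocks\n"
        + "uniform vec2 uBlockClipSize;             // size of one block in clip space\n"
        + "out vec2 vMapCoord;\n"
        + "void main() {\n"
        + "    // Shift the grid by the fractional part of the offset (offset modulo block size)\n"
        + "    vec2 screenBlocks = aGridPos - fract(uOffset);\n"
        + "    gl_Position = vec4(screenBlocks * uBlockClipSize - 1.0, 0.0, 1.0);\n"
        + "    // Which cell of the 100x100 map this vertex lands on (offset divided by block size)\n"
        + "    vMapCoord = (aGridPos + floor(uOffset)) / 100.0;\n"
        + "}\n";
```

The fragment shader would then use vMapCoord to decide which block to draw, e.g. by sampling a tile-index texture and then the sprite sheet.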

Another option is to create 4 roughly screen-sized (e.g. 30x15) grids of blocks. Most of the time, the data would remain constant. When a new area comes into view, replace one of the grids that is completely off-screen with the data for the new area. The grids should be slightly larger than the screen to avoid repeated updates for small back-and-forth movements.
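
A rough sketch of the bookkeeping for that, assuming OpenGL ES 2.0 on Android (all names and the region layout are illustrative, and the code that actually fills the scratch buffer from the map data is omitted):

```java
import android.opengl.GLES20;
import java.nio.FloatBuffer;

// Sketch of the four-grid approach: each grid covers a 30x15 block region and
// owns its own VBO. When the camera has moved far enough that a grid is fully
// off-screen, its buffer is refilled with the data for the next region.
class ScrollingGridPatch {
    static final int REGION_W = 30, REGION_H = 15; // blocks per grid, slightly larger than the screen

    int vbo;              // GL buffer holding this grid's vertex data
    int regionX, regionY; // which 30x15 region of the 100x100 map is currently loaded

    boolean isCompletelyOffscreen(float camBlockX, float camBlockY, int screenW, int screenH) {
        return regionX * REGION_W + REGION_W < camBlockX   // entirely left of the view
            || regionX * REGION_W > camBlockX + screenW    // entirely right of the view
            || regionY * REGION_H + REGION_H < camBlockY   // entirely below the view
            || regionY * REGION_H > camBlockY + screenH;   // entirely above the view
    }

    // Upload the vertex data for a new region in one call; no per-frame work afterwards.
    void loadRegion(int newRegionX, int newRegionY, FloatBuffer scratch /* filled elsewhere */) {
        regionX = newRegionX;
        regionY = newRegionY;
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
        GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0,
                scratch.capacity() * 4 /* bytes */, scratch);
    }
}
```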

Either way, the translation from world coordinates to screen coordinates is performed by the transformation in the vertex shader, not by modifying the vertex coordinates passed to OpenGL.

The latter option avoids the need to do anything non-trivial with shaders (or even to use shaders on legacy OpenGL versions). GPU load will be lower but there will be increased CPU load whenever you need to swap between areas. The former option will have more consistent performance, and will offload more of the work to the GPU.