Tag Archives: Games

Android Developer Story: Zabob Studio and Buff Studio reach global users with Google Play

Posted by Lily Sheringham, Google Play team

South Korean game developers Zabob Studio and Buff Studio are start-ups seeking to become major players in the global mobile games industry.

Zabob Studio was set up by Kwon Dae-hyeon and his wife in 2013. It is a couple-run business, but they have already published ten games, including the hits ‘Zombie Judgement Day’ and ‘Infinity Dungeon.’ So far, the company has generated more than KRW ₩140M (approximately $125,000 USD) in sales revenue, with about 60 percent of the studio’s downloads coming from international markets such as Taiwan and Brazil.

Elsewhere, Buff Studio was founded in 2014 and, right from the start, its first game Buff Knight was an instant hit. It was even featured as the ‘Game of the Week’ on Google Play and was included in “30 Best Games of 2014” lists. A sequel is already in the works, showing the potential of the franchise.

In the videos below, Kwon Dae-hyeon, CEO of Zabob Studio, and Kim Do-Hyeong, CEO of Buff Studio, talk about how Google Play services and the Google Play Developer Console have helped them maintain a competitive edge, market their games efficiently to global users and grow revenue on the platform.

Android Developer Story: Buff Studio - Reaching global users with Google Play

Android Developer Story: Zabob Studio - Growing revenue with Google Play

Check out Zabob Studio’s apps and Buff Knight on Google Play!

We’re pleased to share that Android Developer Stories will now come with translated subtitles on YouTube in popular languages around the world. Find out how to turn on YouTube captions. To read locally translated blog posts, visit the Google developer blog in Korean.

Game Performance: Data-Oriented Programming

Posted by Shanee Nishry, Game Developer Advocate

To improve game performance, we’d like to highlight a programming paradigm that will help you maximize your CPU potential, make your game more efficient, and code smarter.

Before we get into the details of data-oriented programming, let’s explain the problems it solves and common pitfalls for programmers.

Memory

The first thing a programmer must understand is that memory is slow and the way you code affects how efficiently it is utilized. Inefficient memory layout and order of operations force the CPU to sit idle, waiting for memory before it can proceed with its work.

The easiest way to demonstrate this is with an example. Take this simple code, for instance:

char data[1000000]; // One Million bytes
unsigned int sum = 0;

for ( int i = 0; i < 1000000; ++i )
{
  sum += data[ i ];
}

An array of one million bytes is declared and iterated over one byte at a time. Now let's change things a little to illustrate how the underlying hardware behaves:

char data[16000000]; // Sixteen Million bytes
unsigned int sum = 0;

for ( int i = 0; i < 16000000; i += 16 )
{
  sum += data[ i ];
}

The array is changed to contain sixteen million bytes and we iterate over one million of them, skipping 16 at a time.

A quick look suggests there shouldn't be any effect on performance, as the code is translated to the same number of instructions and runs the same number of times. However, that is not the case. Here is the difference graph. Note that this is on a logarithmic scale; if the scale were linear, the performance difference would be too large to display on any reasonably-sized graph!


Graph in logarithmic scale

The simple change making the loop skip 16 bytes at a time makes the program run 5 times slower!

The average difference in performance is 5x, and it stays consistent whether iterating over 1,000 bytes or a million bytes, sometimes increasing to as much as 7x. This is a serious change in performance.

Note: The benchmark was run on multiple hardware configurations, including a desktop with an Intel 5930K 3.50GHz CPU, a MacBook Pro Retina laptop with a 2.6 GHz Intel i7 CPU, and Android Nexus 5 and Nexus 6 devices. The results were pretty consistent.

If you wish to replicate the test, you might have to ensure the memory is out of the cache before running the loop, because some compilers will cache the array on declaration. Read on to understand how this works.
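
For reference, here is a minimal sketch of such a benchmark, assuming a C++11 compiler. It is a simplified harness rather than the original test; for accurate numbers you would also evict the cache between runs, for example by walking a buffer larger than the last-level cache:

#include <chrono>
#include <cstdio>
#include <vector>

static std::vector<char> data( 16000000, 1 ); // sixteen million bytes

// Walks `count` bytes of `data`, `stride` bytes at a time, and reports the elapsed time.
static void TimeLoop( size_t count, size_t stride )
{
    unsigned int sum = 0;
    auto start = std::chrono::high_resolution_clock::now();
    for ( size_t i = 0; i < count; i += stride )
        sum += data[ i ];
    auto end = std::chrono::high_resolution_clock::now();
    long long us = std::chrono::duration_cast< std::chrono::microseconds >( end - start ).count();
    std::printf( "stride %2zu: %lld us (sum=%u)\n", stride, us, sum ); // print sum so the loop is not optimized away
}

int main()
{
    TimeLoop( 1000000, 1 );   // one million contiguous bytes
    TimeLoop( 16000000, 16 ); // the same number of iterations, skipping 16 bytes at a time
}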

Explanation

What happens in the example is quite simply explained once you understand how the CPU accesses data. The CPU can’t operate directly on data in RAM; the data must first be copied to the cache, a smaller but extremely fast memory which resides next to the CPU.

When the program starts, the CPU is set to run an instruction on part of the array but that data is still not in the cache, therefore causing a cache miss and forcing the CPU to wait for the data to be copied into the cache.

For simplicity’s sake, assume an L1 cache line size of 16 bytes. This means 16 bytes will be copied starting from the address requested by the instruction.

In the first code example, the program next tries to operate on the following byte, which has already been copied into the cache by the initial cache miss, so it continues smoothly. The same is true for the next 14 bytes. Sixteen bytes after the first cache miss, the loop will encounter another cache miss and the CPU will again wait for data to operate on while the next 16 bytes are copied into the cache.

In the second code sample, the loop skips 16 bytes at a time but the hardware behaves exactly the same. The cache copies the 16 subsequent bytes each time it encounters a cache miss, which means the loop triggers a cache miss on every iteration and the CPU sits idle waiting for data each time!

Note: Modern hardware implements cache prefetch algorithms to avoid incurring a cache miss on every iteration, but even with prefetching, more memory bandwidth is used and performance is lower in our example test.

In reality, cache lines tend to be larger than 16 bytes; the program would run much slower if it had to wait for data at every iteration. The Krait 400 CPU found in the Nexus 5, for example, has an L0 data cache of 4 KB with 64 bytes per line.

If you are wondering why cache lines are so small, the main reason is that making fast memory is expensive.

Data-Oriented Design

The way to solve such performance issues is to design your data to fit into the cache and to have the program operate on the data contiguously.

This can be done by organizing your game objects inside Structures of Arrays (SoA) instead of Arrays of Structures (AoS) and pre-allocating enough memory to contain the expected data.

For example, a simple physics object in an AoS layout might look like this:

struct PhysicsObject
{
  Vec3 mPosition;
  Vec3 mVelocity;

  float mMass;
  float mDrag;
  Vec3 mCenterOfMass;

  Vec3 mRotation;
  Vec3 mAngularVelocity;

  float mAngularDrag;
};

This is a common way to represent an object in C++.

On the other hand, using SoA layout looks more like this:

class PhysicsSystem
{
private:
  size_t mNumObjects;
  std::vector< Vec3 > mPositions;
  std::vector< Vec3 > mVelocities;
  std::vector< float > mMasses;
  std::vector< float > mDrags;

  // ...
};

Let’s compare how a simple function to update object positions by their velocity would operate.

For the AoS layout, a function would look like this:

void UpdatePositions( PhysicsObject* objects, const size_t num_objects, const float delta_time )
{
  for ( size_t i = 0; i < num_objects; ++i )
  {
    objects[i].mPosition += objects[i].mVelocity * delta_time;
  }
}

Each PhysicsObject is loaded into the cache, but only the first two members are used. At 12 bytes each, that amounts to just 24 bytes of the cache line being utilised per iteration. Since the whole struct is 72 bytes (assuming 12-byte Vec3s and no padding), every object spills past the 64-byte cache line of a Nexus 5 and causes a cache miss on each iteration.

Now let’s look at the SoA way. This is our iteration code:

void PhysicsSystem::SimulateObjects( const float delta_time )
{
  for ( size_t i = 0; i < mNumObjects; ++i )
  {
    mPositions[ i ] += mVelocities[i] * delta_time;
  }
}

With this code, we immediately cause two cache misses, one for each array, but we are then able to run smoothly for about 5.3 iterations (a 64-byte cache line holds 64 / 12 ≈ 5.3 Vec3 values) before causing the next two cache misses, resulting in a significant performance increase!

The way data is sent to the hardware matters. Be aware of data-oriented design and look for places it will perform better than object-oriented code.

We have barely scratched the surface. There is still more to data-oriented programming than structuring your objects. For example, the cache is also used for storing instructions and function stack memory, so optimizing your functions and local variables affects cache misses and hits. We also did not mention the L2 cache, or how data-oriented design makes your application easier to multithread.
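
As a sketch of that last point, assuming C++11 (the Vec3 type and the thread split are illustrative, not from the article): because each SoA array is contiguous and each object is independent, every worker thread can update its own slice of the arrays without touching another thread's cache lines.

#include <thread>
#include <vector>

struct Vec3
{
    float x, y, z;
    Vec3& operator+=( const Vec3& o ) { x += o.x; y += o.y; z += o.z; return *this; }
};

inline Vec3 operator*( const Vec3& v, float s ) { return Vec3{ v.x * s, v.y * s, v.z * s }; }

// Updates positions[begin, end) from the matching velocities; each worker gets its own contiguous slice.
static void SimulateRange( std::vector< Vec3 >& positions, const std::vector< Vec3 >& velocities,
                           size_t begin, size_t end, float delta_time )
{
    for ( size_t i = begin; i < end; ++i )
        positions[ i ] += velocities[ i ] * delta_time;
}

void SimulateObjectsParallel( std::vector< Vec3 >& positions, const std::vector< Vec3 >& velocities,
                              float delta_time, unsigned num_threads )
{
    std::vector< std::thread > workers;
    const size_t count = positions.size();
    const size_t chunk = ( count + num_threads - 1 ) / num_threads;

    for ( unsigned t = 0; t < num_threads; ++t )
    {
        const size_t begin = t * chunk;
        const size_t end = begin + chunk < count ? begin + chunk : count;
        if ( begin < end )
            workers.emplace_back( [&positions, &velocities, begin, end, delta_time]
                                  { SimulateRange( positions, velocities, begin, end, delta_time ); } );
    }

    for ( std::thread& worker : workers )
        worker.join();
}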

Make sure to profile your code to find out where you might want to implement data-oriented design. You can use different profilers for different architectures, including the NVIDIA Tegra System Profiler, the ARM Streamline Performance Analyzer, Intel's profiling tools and the PowerVR PVRMonitor.

If you want to learn more about optimizing for the cache, read up on cache prefetching for the various CPU architectures.

Game Performance: Geometry Instancing

Posted by Shanee Nishry, Games Developer Advocate

Imagine a beautiful virtual forest with countless trees, plants and vegetation, or a stadium with countless people in the crowd cheering. If you are heroic you might like the idea of an epic battle between armies.

Rendering a lot of meshes is needed to create a beautiful scene like a forest, a cheering crowd or an army, but doing so naively is quite costly and reduces the frame rate. Fortunately, rendering many copies of the same mesh cheaply is possible using a simple technique called Geometry Instancing.

Geometry instancing can be used in 2D games for rendering a large number of sprites, or in 3D for things like particles, characters and environment.

The NDK code sample More Teapots, which demonstrates the content of this article, can be found with the NDK inside the samples folder and in the git repository.

Support and Extensions

Geometry instancing is available from OpenGL ES 3.0, and on OpenGL ES 2.0 devices which support the GL_NV_draw_instanced or GL_EXT_draw_instanced extensions. More information on how to use the extensions is shown in the More Teapots demo.
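
On OpenGL ES 2.0, a minimal runtime check along these lines (a sketch, not part of the More Teapots sample) can tell you whether one of the instancing extensions is present:

#include <GLES2/gl2.h>
#include <cstring>

// Returns true if the current ES 2.0 context exposes one of the instancing extensions.
bool SupportsInstancingExtension()
{
    const char* extensions = reinterpret_cast< const char* >( glGetString( GL_EXTENSIONS ) );
    return extensions != nullptr &&
           ( std::strstr( extensions, "GL_EXT_draw_instanced" ) != nullptr ||
             std::strstr( extensions, "GL_NV_draw_instanced" ) != nullptr );
}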

Overview

Submitting draw calls causes OpenGL to queue commands to be sent to the GPU. This has an expensive overhead which may affect performance. The overhead grows when changing states such as the alpha blending function, active shader, textures and buffers.

Geometry Instancing is a technique that combines multiple draws of the same mesh into a single draw call, resulting in reduced overhead and potentially increased performance. This works even when different transformations are required.

The algorithm

To explain how Geometry Instancing works, let’s quickly review traditional drawing.

Traditional Drawing

To draw a mesh you’d usually prepare a vertex buffer and an index buffer, bind your shader and buffers, set your uniforms such as a World View Projection matrix, and make a draw call.

To draw multiple instances using the same mesh you set new uniform values for the transformations and other data and call draw again. This is repeated for every instance.
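
As a point of comparison, the traditional path looks roughly like this sketch, where ComputeMVP, Instance and the handle and index count are illustrative names rather than code from the sample; note the uniform upload and draw call repeated for every instance:

#include <GLES2/gl2.h>
#include <vector>

struct Mat4x4 { float m[16]; };
struct Instance { /* per-object transform data */ };

Mat4x4 ComputeMVP( const Instance& instance ); // assumed to be defined elsewhere

void DrawInstancesTraditionally( GLint mvpHandle, GLsizei numIndices,
                                 const std::vector< Instance >& instances )
{
    for ( const Instance& instance : instances )
    {
        Mat4x4 mvp = ComputeMVP( instance );                              // per-instance transform
        glUniformMatrix4fv( mvpHandle, 1, GL_FALSE, mvp.m );              // re-set the uniform...
        glDrawElements( GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0 ); // ...and issue another draw call
    }
}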

Drawing with Geometry Instancing

Geometry Instancing reduces CPU overhead by collapsing the sequence described above into a single buffer update and a single draw call.

It works by using an additional buffer which contains the custom per-instance data needed by your shader, such as transformations, colors and lighting data.

The first change to your workflow is to create the additional buffer during the initialization stage.

To put it into code, let’s define an example per-instance data structure that includes a world view projection matrix and a color:

C++

struct PerInstanceData
{
 Mat4x4 WorldViewProj;
 Vector4 Color;
};

You also need to add the structure to your shader. The easiest way is by creating a Uniform Block with an array:

GLSL

#define MAX_INSTANCES 512

// Structure definitions cannot be nested inside a uniform block,
// so the per-instance struct is declared first and then used in the block.
struct InstanceData
{
    mat4      uMVP;
    vec4      uColor;
};

layout(std140) uniform PerInstanceData {
    InstanceData Data[ MAX_INSTANCES ];
};

Note that uniform blocks have limited sizes. You can find the maximum number of bytes you can use by querying for GL_MAX_UNIFORM_BLOCK_SIZE using glGetIntegerv.

Example:

GLint max_block_size = 0;
glGetIntegerv( GL_MAX_UNIFORM_BLOCK_SIZE, &max_block_size );

Bind the uniform block on the CPU in your program’s initialization stage:

C++

#define MAX_INSTANCES 512
#define BINDING_POINT 1
GLuint shaderProgram; // Compiled shader program

// Bind Uniform Block
GLuint blockIndex = glGetUniformBlockIndex( shaderProgram, "PerInstanceData" );
glUniformBlockBinding( shaderProgram, blockIndex, BINDING_POINT );

And create a corresponding uniform buffer object:

C++

// Create Instance Buffer
GLuint instanceBuffer;

glGenBuffers( 1, &instanceBuffer );
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );
glBindBufferBase( GL_UNIFORM_BUFFER, BINDING_POINT, instanceBuffer );

// Initialize buffer size
glBufferData( GL_UNIFORM_BUFFER, MAX_INSTANCES * sizeof( PerInstanceData ), NULL, GL_DYNAMIC_DRAW );

The next step is to update the instance data every frame to reflect changes to the visible objects you are going to draw. Once you have your new instance buffer you can draw everything with a single call to glDrawElementsInstanced.

You update the instance buffer using glMapBufferRange. This function locks the buffer and retrieves a pointer to the byte data allowing you to copy your per-instance data. Unlock your buffer using glUnmapBuffer when you are done.

Here is a simple example for updating the instance data:

const int NUM_SCENE_OBJECTS = …; // number of objects visible in your scene which share the same mesh

// Bind the buffer
glBindBuffer( GL_UNIFORM_BUFFER, instanceBuffer );

// Retrieve pointer to map the data
PerInstanceData* pBuffer = (PerInstanceData*) glMapBufferRange( GL_UNIFORM_BUFFER, 0,
                NUM_SCENE_OBJECTS * sizeof( PerInstanceData ),
                GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT );

// Iterate the scene objects
for ( int i = 0; i < NUM_SCENE_OBJECTS; ++i )
{
    pBuffer[ i ].WorldViewProj = ... // Copy World View Projection matrix
    pBuffer[ i ].Color = …               // Copy color
}

glUnmapBuffer( GL_UNIFORM_BUFFER ); // Unmap the buffer

And finally you can draw everything with a single call to glDrawElementsInstanced or glDrawArraysInstanced (depending if you are using an index buffer):

glDrawElementsInstanced( GL_TRIANGLES, NUM_INDICES, GL_UNSIGNED_SHORT, 0,
                NUM_SCENE_OBJECTS );

You are almost done! There is just one more step to do. In your shader you need to make use of the new uniform buffer object for your transformations and colors. In your shader main program:

void main()
{
    …
    // Members of a uniform block declared without an instance name are accessed directly.
    gl_Position = Data[ gl_InstanceID ].uMVP * inPosition;
    outColor = Data[ gl_InstanceID ].uColor;
}

You might have noticed the use of gl_InstanceID. This is a predefined OpenGL vertex shader variable that tells your program which instance it is currently drawing. Using this variable, your shader can index the instance data and match the correct transformation and color for every vertex.

That’s it! You are now ready to use Geometry Instancing. If you are drawing the same mesh multiple times in a frame make sure to implement Geometry Instancing in your pipeline! This can greatly reduce overhead and improve performance.

Android Developer Story: Wooga’s fast iterations on Android and Google Play

Posted by Leticia Lago, Google Play team

In order to make the best possible games, Wooga works on roughly 40 concepts and prototypes per year, out of which 10 go into production, around seven soft launch, and only two make it to global launch. It’s what they call “the hit filter.” For their latest title, Agent Alice, they follow up with new episodes every week to maintain player interest and engagement over time.

The ability to quickly iterate on both live and in-development games is therefore key to Wooga’s business model — Android and Google Play provide them the tools they need, meaning that new features and updates are made on Android first, before they get to other platforms.

Find out more from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering, and learn how the iteration features of Android and Google Play have contributed to successes such as Diamond Dash, Jelly Splash, and Agent Alice.

You can find out more about building successful games businesses on Android and Google Play at Google I/O 2015: in person, on the live stream, or session recordings after the event. Check out the following:

  • Developers connecting the world through Google Play - Hear how the new mobile ecosystem, including Google Play and Android, is empowering developers to make good on the dream of connecting the world through technology to improve people's lives. This session will be live streamed.
  • Growing games with Google — In addition to consoles, PC, and browser gaming, as well as phone and tablet games, there are emerging fields including virtual reality and mobile games in the living room. This talk covers how Google is helping developers across this broad range of platforms. This session will be live streamed.
  • What’s new in the Google Play Developer Console - Google Play’s new launches will help you acquire more users and improve the quality of your app. Hear an overview of the latest features and how you can start taking advantage of them in the Developer Console.
  • Smarter approaches to app testing — Hear about the new ways Google can help maximize the success of your next app launch with cheaper and easier testing strategies.

Game Performance: Explicit Uniform Locations

Posted by Shanee Nishry, Games Developer Advocate

Uniform variables in GLSL are crucial for passing data between the game code on the CPU and the shader program on the graphics card. Unfortunately, up until the availability of OpenGL ES 3.1, using uniforms required some preparation which made the workflow slightly more complicated and wasted time during loading.

Let us examine a simple vertex shader and see how OpenGL ES 3.1 allows us to improve it:

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
    outTexCoord = vertexUV;
    gl_Position = matWorldViewProjection * vertexPosition;
}

Note: You might be familiar with this shader from a previous Game Performance article on Layout Qualifiers. Find it here.

We have a single uniform for our world view projection matrix:

uniform mat4 matWorldViewProjection;

The inefficiency appears when you want to assign the uniform value.

You need to use glUniformMatrix4fv or glUniform4f to set the uniform’s value but you also need the handle for the uniform’s location in the program. To get the handle you must call glGetUniformLocation.

GLuint program; // the shader program
float matWorldViewProject[16]; // 4x4 matrix as float array

GLint handle = glGetUniformLocation( program, "matWorldViewProjection" );
glUniformMatrix4fv( handle, 1, GL_FALSE, matWorldViewProject );

That pattern means calling glGetUniformLocation for each uniform in every shader and keeping the handles around or, worse, calling glGetUniformLocation every frame.

Warning! Never call glGetUniformLocation every frame! Not only is it bad practice but it is slow and bad for your game’s performance. Always call it during initialization and save it somewhere in your code for use in the render loop.
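
A minimal sketch of that pattern, with illustrative names, might look like this: the handle is looked up once at load time and reused every frame.

#include <GLES2/gl2.h>

struct ShaderHandles
{
    GLuint program;       // compiled and linked shader program
    GLint  worldViewProj; // cached uniform location
};

void InitHandles( ShaderHandles& shader )
{
    shader.worldViewProj = glGetUniformLocation( shader.program, "matWorldViewProjection" ); // once, at initialization
}

void Render( const ShaderHandles& shader, const float matWorldViewProject[ 16 ] )
{
    glUseProgram( shader.program );
    glUniformMatrix4fv( shader.worldViewProj, 1, GL_FALSE, matWorldViewProject ); // every frame, no lookup
    // ... issue draw calls ...
}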

Even when you cache the handles, this process is inefficient: it requires you to do more work and costs precious time and performance.

Also take into consideration that you might have multiple shaders with the same uniforms. It would be much better if your code were deterministic and the shader language allowed you to explicitly set the locations of your uniforms, so you don’t need to query and manage access handles. This is now possible with Explicit Uniform Locations.

You can set the location for uniforms directly in the shader’s code. They are declared like this:

layout(location = index) uniform type name;

For our example shader it would be:

layout(location = 0) uniform mat4 matWorldViewProjection;

This means you never need to use glGetUniformLocation again, resulting in simpler code, a simpler initialization process and saved CPU cycles.

This is how the example shader looks after the change:

#version 310 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

layout(location = 0) uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
    outTexCoord = vertexUV;
    gl_Position = matWorldViewProjection * vertexPosition;
}

As Explicit Uniform Locations are only supported from OpenGL ES 3.1 we also changed the version declaration to 310.

Now all you need to do to set your matWorldViewProjection uniform value is call glUniformMatrix4fv for the handle 0:

const GLint UNIFORM_MAT_WVP = 0; // Uniform location for WorldViewProjection
float matWorldViewProject[16]; // 4x4 matrix as float array

glUniformMatrix4fv( UNIFORM_MAT_WVP, 1, GL_FALSE, matWorldViewProject );

This change is extremely simple and the improvements can be substantial: cleaner code, a simpler asset pipeline and improved performance. Be sure to make these changes if you are targeting OpenGL ES 3.1 or creating multiple APKs to support a wide range of devices.

To learn more about Explicit Uniform Locations check out the OpenGL wiki page for it which contains valuable information on different layouts and how arrays are represented.

Power Great Gaming with New Analytics from Play Games

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly becoming a necessity for running a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and Bombsquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. But, for a little extra work, you can also unlock another set of high impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, which helps you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.

Manage your business to revenue targets

Set your spend target in Player Analytics by choosing a daily goal

To help assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then assess how you're doing against that goal through the Target vs Actual report depicted below. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and any cell with a details icon can be clicked to see more about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players that continued to play your game on each of the seven days after installing it.

Learn more.

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel helps you identify where your players are struggling and churning to help you refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying and using resources.

For example, Eric Froemling, the one-man developer of BombSquad, used the Sources and Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).

Game Performance: Layout Qualifiers

Today, we want to share some best practices on using the OpenGL Shading Language (GLSL) that can optimize the performance of your game and simplify your workflow. Specifically, layout qualifiers make your code more deterministic and increase performance by reducing your work.

Let’s start with a simple vertex shader and change it as we go along.

This basic vertex shader takes position and texture coordinates, transforms the position and outputs the data to the fragment shader:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

uniform mat4 matWorldViewProjection;

varying vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Vertex Attribute Index

To draw a mesh on to the screen, you need to create a vertex buffer and fill it with vertex data, including positions and texture coordinates for this example.

In our sample shader, the vertex data may be laid out like this:

struct Vertex
{
  Vector4 Position;
  Vector2 TexCoords;
};

Therefore, we defined our vertex shader attributes like this:

attribute vec4 vertexPosition;
attribute vec2 vertexUV;

To associate the vertex data with the shader attributes, a call to glGetAttribLocation will get the handle of the named attribute. The attribute format is then detailed with a call to glVertexAttribPointer.

GLint handleVertexPos = glGetAttribLocation( myShaderProgram, "vertexPosition" );
glVertexAttribPointer( handleVertexPos, 4, GL_FLOAT, GL_FALSE, 0, 0 );

GLint handleVertexUV = glGetAttribLocation( myShaderProgram, "vertexUV" );
glVertexAttribPointer( handleVertexUV, 2, GL_FLOAT, GL_FALSE, 0, 0 );

But you may have multiple shaders with the vertexPosition attribute, and calling glGetAttribLocation for every shader wastes performance and increases the loading time of your game.

Using layout qualifiers you can change your vertex shader attribute declarations like this:

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

To do so you also need to tell the shader compiler that your shader targets OpenGL ES version 3.0. This is done by adding a version declaration:

#version 300 es

Let’s see how this affects our shader:

#version 300 es

layout(location = 0) in vec4 vertexPosition;
layout(location = 1) in vec2 vertexUV;

uniform mat4 matWorldViewProjection;

out vec2 outTexCoord;

void main()
{
  outTexCoord = vertexUV;
  gl_Position = matWorldViewProjection * vertexPosition;
}

Note that we also changed outTexCoord from varying to out. The varying keyword is deprecated from version 300 es and must be changed for the shader to work.

Note that vertex attribute qualifiers and #version 300 es are supported from OpenGL ES 3.0. The desktop equivalent is supported on OpenGL 3.3, using #version 330.

Now you know your position attribute is always at 0 and your texture coordinates are at 1, so you can bind your shader format without using glGetAttribLocation:

const int ATTRIB_POS = 0;
const int ATTRIB_UV  = 1;

glVertexAttribPointer( ATTRIB_POS, 4, GL_FLOAT, GL_FALSE, 0, 0 );
glVertexAttribPointer( ATTRIB_UV, 2, GL_FLOAT, GL_FALSE, 0, 0 );

This simple change leads to a cleaner pipeline, simpler code and saved performance during loading time.

To learn more about performance on Android, check out the Android Performance Patterns series.

Posted by Shanee Nishry, Games Developer Advocate

We’ll see you at GDC 2015!

Posted by Greg Hartrell, Senior Product Manager of Google Play Games

The Game Developers Conference (GDC) is less than one week away in San Francisco. This year we will host our annual Developer Day at West Hall and be on the Expo floor in booth #502. We’re excited to give you a glimpse into how we are helping mobile game developers build successful businesses and improve user experiences.

Our Developer Day will take place in Room 2006 of the West Hall of Moscone Center on Monday, March 2. We're keeping the content action-oriented with a few presentations and lightning talks, followed by a full afternoon of hands on hacking with Google engineers. Here’s a look at the schedule:

Opening Keynote || 10AM: We’ll kick off the day by sharing how to make your games more successful with Google. You’ll hear about new platforms, new tools to make development easier, and ways to measure your mobile games and monetize them.

Running A Successful Games Business with Google || 10:30AM: Next we’ll hear from Bob Meese, the Global Head of Games Business Development from Google Play, who’ll offer some key pointers on how to make sure you're best taking advantage of unique tools on Google Play to grow your business effectively.

Lightning Talks || 11:15AM: Ready to absorb all the opportunities Google has to offer your game business? These quick, 5-minute talks will cover everything from FlatBuffers to Google Cast to data interpolation. To keep us on track, a gong may be involved.

Code Labs || 1:30PM: After lunch, we’ll turn the room into a classroom setting where you can participate in a number of self-guided code labs focused on leveraging Analytics, Google Play game services, Firebase and VR with Cardboard. These Code Labs are completely self-paced and will be available throughout the afternoon. If you want admission to the code labs earlier, sign up for Priority Access here!

Also, be sure to check out the Google booth on the Expo floor to get hands on experiences with Project Tango, Niantic Labs and Cardboard starting on Wednesday, March 4. Our teams from AdMob, AdWords, Analytics, Cloud Platform and Firebase will also be available to answer any of your product questions.

For more information on our presence at GDC, including a full list of our talks and speaker details, please visit g.co/dev/gdc2015. Please note that these events are part of the official Game Developer's Conference, so you will need a pass to attend. If you can't attend GDC in person, you can still check out our morning talks on our livestream at g.co/dev/gdc-livestream.

Efficient Game Textures with Hardware Compression

Posted by Shanee Nishry, Developer Advocate

As you may know, high resolution textures contribute to better graphics and a more impressive game experience. Adaptive Scalable Texture Compression (ASTC) helps solve many of the challenges involved, including reducing memory footprint and loading time, and can even increase performance and battery life.

If you have a lot of textures, you are probably already compressing them. Unfortunately, not all compression algorithms are made equal. PNG, JPG and other common formats are not GPU friendly. Some of the highest-quality algorithms today are proprietary and limited to certain GPUs. Until recently, the only broadly supported GPU accelerated formats were relatively primitive and produced poor results.

With the introduction of ASTC, a new compression technique invented by ARM and standardized by the Khronos group, we expect to see dramatic changes for the better. ASTC promises to be both high quality and broadly supported by future Android devices. But until devices with ASTC support become widely available, it’s important to understand the variety of legacy formats that exist today.

We will examine GPU-friendly compression formats that help you reduce the .apk size and loading times of your game.

Texture Compression

Popular compressed formats include PNG and JPG, which can’t be decoded directly by the GPU. As a consequence, they need to be decompressed before copying them to the GPU memory. Decompressing the textures takes time and leads to increased loading times.

A better option is to use hardware accelerated formats. These formats are lossy but have the advantage of being designed for the GPU.

This means they do not need to be decompressed before being copied, which decreases loading times for the player and may even increase performance due to hardware optimizations.
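
For illustration, this is roughly what the upload path looks like for a hardware format; the sketch assumes the ETC1 payload has already been loaded from a file (for example a .pkm or KTX container), and the names are illustrative:

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h> // defines GL_ETC1_RGB8_OES

// Hands the compressed bytes straight to the driver; no CPU-side decode step is needed.
void UploadEtc1Texture( GLuint texture, GLsizei width, GLsizei height,
                        const void* etc1Data, GLsizei etc1Size )
{
    glBindTexture( GL_TEXTURE_2D, texture );
    glCompressedTexImage2D( GL_TEXTURE_2D, 0, GL_ETC1_RGB8_OES,
                            width, height, 0, etc1Size, etc1Data );
}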

Hardware Accelerated Formats

Hardware accelerated formats have many benefits. As mentioned before, they help improve loading times and the runtime memory footprint.

Additionally, these formats help improve performance, battery life and reduce heating of the device, requiring less bandwidth while also consuming less energy.

There are two categories of hardware accelerated formats, standard and proprietary. This table shows the standard formats:

ETC1: Supported on all Android devices with OpenGL ES 2.0 and above. Does not support alpha channel.
ETC2: Requires OpenGL ES 3.0 and above.
ASTC: Higher quality than ETC1 and ETC2. Supported with the Android Extension Pack.

As you can see, with higher OpenGL support you gain access to better formats. There are proprietary formats to replace ETC1, delivering higher quality and alpha channel support. These are shown in the following table:

ATC: Available with Adreno GPUs.
PVRTC: Available with PowerVR GPUs.
DXT1: S3 DXT1 texture compression. Supported on devices running the Nvidia Tegra platform.
S3TC: S3 texture compression, nonspecific to DXT variant. Supported on devices running the Nvidia Tegra platform.

That’s a lot of formats, revealing a different problem. How do you choose which format to use?

To best support all devices you need to create multiple APKs using different texture formats. The Google Play Developer Console allows you to upload multiple APKs and will deliver the right one to each user based on their device. For more information, check this page.

When a device only supports OpenGL ES 2.0, it is recommended to use a proprietary format to get the best results possible; this means making an APK for each GPU family.

On devices with access to OpenGL ES 3.0 you can use ETC2. The GL_COMPRESSED_RGBA8_ETC2_EAC format is an improved version of ETC1 with added alpha support.

The best case is when the device supports the Android Extension Pack. Then you should use the ASTC format which has better quality and is more efficient than the other formats.
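
In practice you can pick the format at runtime along these lines; this is a sketch with illustrative names, assuming the texture assets were packaged in all three formats (or split across APKs as described above):

#include <GLES2/gl2.h>
#include <cstring>

enum class TextureFormat { ASTC, ETC2, ETC1 };

// Chooses the best compressed texture format the device supports.
TextureFormat PickTextureFormat( bool openglEs3Supported )
{
    const char* extensions = reinterpret_cast< const char* >( glGetString( GL_EXTENSIONS ) );
    if ( extensions != nullptr && std::strstr( extensions, "GL_KHR_texture_compression_astc_ldr" ) != nullptr )
        return TextureFormat::ASTC; // Android Extension Pack class devices
    if ( openglEs3Supported )
        return TextureFormat::ETC2; // guaranteed from OpenGL ES 3.0
    return TextureFormat::ETC1;     // baseline for OpenGL ES 2.0
}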

Adaptive Scalable Texture Compression (ASTC)

The Android Extension Pack has ASTC as a standard format, removing the need to have different formats for different devices.

In addition to being supported on modern hardware, ASTC also offers improved quality over other GPU formats by having full alpha support and better quality preservation.

ASTC is a block-based texture compression algorithm developed by ARM. It offers multiple block footprints and bitrate options to lower the size of the final texture. The larger the block footprint, the smaller the final file, but with potentially more quality loss.

Note that some images compress better than others. Images with similar neighboring pixels tend to have better quality compared to images with vastly different neighboring pixels.

Let’s examine a texture to better understand ASTC:

This bitmap is 1.1MB uncompressed and 299KB when compressed as PNG.

Compressing the Android jellybean jar texture into ASTC through the Mali GPU Texture Compression Tool yields the following results.

Block Footprint    4x4      6x6      8x8
Memory             262KB    119KB    70KB

(The original post also shows the compressed image output, a difference map and a 5x enhanced difference map for each block footprint.)

As you can see, the highest quality (4x4) bitrate for ASTC already gains over PNG in memory size. Unlike PNG, this gain stays even after copying the image to the GPU.

The tradeoff comes in the detail, so it is important to carefully examine textures when compressing them to see how much compression is acceptable.

Conclusion

Using hardware accelerated texture formats in your games will help you reduce the size of your .apk, your runtime memory use, and your loading times.

Improve performance on a wider range of devices by uploading multiple apks with different GPU texture formats and declaring the texture type in the AndroidManifest.xml.

If you are aiming for high end devices, make sure to use ASTC which is included in the Android Extension Pack.

Sky Force 2014 Reimagined for Android TV

By Jamil Moledina, Games Strategic Partnerships Lead, Google Play

In the coming months, we’ll be seeing more media players, like the recently released Nexus Player, and TVs from partners with Android TV built-in hit the market. While there’s plenty of information available about the technical aspects of adapting your app or game to Android TV, it’s also useful to consider design changes to optimize for the living room. That way you can provide lasting engagement for existing fans as well as new players discovering your game in this new setting. Here are three things one developer did, and how you can do them too.

Infinite Dreams is an indie studio out of Poland, co-founded by hardcore game fans Tomasz Kostrzewski and Marek Wyszyński. With Sky Force 2014 TV, they brought their hit arcade style game to Android TV in a particularly clever way. The mobile-based version of Sky Force 2014 reimagined the 2004 classic by introducing stunning 3D visuals, and a free-to-download business model using in-app purchasing and competitive tournaments to increase engagement. In bringing Sky Force 2014 to TV, they found ways to factor in the play style, play sessions, and real-world social context of the living room, while paying homage to the title’s classic arcade heritage. As Wyszyński puts it, “We decided not to take any shortcuts, we wanted to make the game feel like it was designed to be played on TV.”

Orientation

For starters, Sky Force 2014 is played vertically on a smartphone or tablet, also known as portrait mode. In the game, you’re piloting a powerful fighter plane flying up the screen over a scrolling landscape, targeting waves of steampunk enemies coming down at you. You can see far enough up the screen, enabling you to plan your attacks and dodge enemies in advance.
Vertical play on the mobile version
When bringing the game to TV, the quickest approach would have been to preserve that vertical orientation of the gameplay, by pillarboxing the field of play.

With Sky Force 2014, Infinite Dreams considered their options, and decided to scale the gameplay horizontally, in landscape mode, and recompose the view and combat elements. You’re still aiming up the screen, but the world below and the enemies coming at you are filling out a much wider field of view. They also completely reworked the UI to be comfortably operated with a gamepad or simple remote. From Wyszyński’s point of view, “We really didn't want to just add support for remote and gamepad on top of what we had because we felt it would not work very well.” This approach gives the play experience a much more immersive field of view, putting you right there in the middle of the action. More information on designing for landscape orientation can be found here.

Multiplayer

Like all mobile game developers building for the TV, Infinite Dreams had to figure out how to adapt touch input onto a controller. Sky Force 2014 TV accepts both remote control and gamepad controller input. Both are well-tuned, and fighter handling is natural and responsive, but Infinite Dreams didn’t stop there. They took the opportunity to add cooperative multiplayer functionality to take advantage of the wider field of view from a TV. In this way, they not only scaled the visuals of the game to the living room, but also factored in that it’s a living room where people play together. Given the extended lateral patterns of advancing enemies, multiplayer strategies emerge, like “divide and conquer,” or “I got your back” for players of different skill levels. More information about adding controller support to your Android game can be found here, handling controller actions here, and mapping each player’s paired controllers here.
Players battle side by side in the Android TV version

Business Model

Infinite Dreams is also experimenting with monetization and extending play session length. The TV version replaces several $1.99 in-app purchases and timers with a try-before-you-buy model which charges $4.99 after playing the first 2 levels for free. We’ve seen this single purchase model prove successful with other arcade action games like Mediocre’s Smash Hit for smartphones and tablets, in which the purchase unlocks checkpoint saves. We’re also seeing strong arcade action games like Vector Unit’s Beach Buggy Racing and Ubisoft’s Hungry Shark Evolution retain their existing in-app purchase models for Android TV. More information on setting up your games for these varied business models can be found here. We’ll be tracking and sharing these variations in business models on Android TV, including variations in premium, as the Android TV platform grows.

Reflecting on the work involved in making these changes, Wyszyński says, “From a technical point of view the process was not really so difficult – it took us about a month of work to incorporate all of the features and we are very happy with the results.” Take a moment to check out Sky Force 2014 TV on a Nexus Player and the other games in the Android TV collection on Google Play, most of which made no design changes and still play well on a TV. Consider your own starting point, take a look at the Android TV starting point on our developer blog, and build the version of your game that would be most satisfying to players on the couch.