Koalabeast Games


TagPro Next Status: Artist, Flags, Joiner and Engine

TagPro Next has an artist! We’re really excited to be moving forward with Sergey Basov as the artist for the new TagPro. Sergey is an experienced game artist who has worked on assets for Game of Bombs – a great web game that you should check out. Here is Sergey’s first draft of the new flag and base for TagPro Next:

TagPro Next Flags

The first iteration of the new joiner is complete and code reviewed. It’s looking awesome and will give us real flexibility in how we join players to games. For example, players will be able to specify whether they’d like to play in casual games or stat games – and which game modes they’d like to participate in. There is still plenty to do on the joiner, but it’s coming along nicely.

A lot of progress has been made on the prototype game engine. So much so that it will likely become the foundation of the new engine. We’ll be putting some work into adding a UI that will let us easily adjust movement and physics settings, so we can find the perfect feel for TagPro Next. Here is a video of LuckySpammer playing around in it:

Just a reminder: we don’t post all blog updates to /r/TagPro – just the most important ones. However, we do post all updates to /r/Koalabeast.

Discuss this post on Reddit

RPC, ES6, and Proxies, Oh My!

Today we’re kicking off the first in a series of posts where we go into more technical stuff than we have in the past. Not everyone will enjoy or even understand these posts, but we hope they will give those of you familiar with JavaScript some neat ideas.

Our experience with TagPro made us wary of large monolithic servers. The DDoS problems grew out of control and there was no great way around them. As long as we have large servers, we’re vulnerable to simple DDoS attacks. While thinking about the design of Next, we realized that many of our problems could be alleviated by switching to microservices.

We have microservices for database access, player management, stat management, the joiner, and more. This gives us great flexibility and a lot more resilience than the current model. However, any of you who have tried using microservices before know that communication is problem number one. There are lots of pre-rolled solutions out there, but all of them have issues in one way or another.

All we wanted was a dead simple RPC library, and no active library we could find would support what we wanted without significant hacking. So, we decided to build our own. We call it Intercom.

Before delving into any of the code, some background. We are using ES6 extensively throughout this project; most of it is transpiled via babel (formerly 6to5), but there are a couple of features of ES6 that are not transpilable. The one we care most about for the purposes of this discussion is the Proxy object. There’s not a huge amount of documentation for it out there, and its syntax is fairly wonky, but it’s immensely powerful.

A good comparison for it is __getattribute__ in Python. Any time you attempt to look up a property on an object, the proxy will intercept the lookup and allow you to override it. So if I call x.blah(), the proxy (x) can decide to return whatever it wants for the “blah” member, which will then be invoked. It’s a great way to accidentally get into infinite recursion.
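As a toy illustration (this is not Intercom code, just the standard ES6 Proxy API), a get trap can answer for any property name:

```javascript
// Toy example: the get trap runs on every property lookup on x.
const x = new Proxy({}, {
  get(target, name) {
    // Whatever we return here is what x.blah evaluates to.
    return () => 'you looked up ' + String(name);
  }
});

console.log(x.blah());     // "you looked up blah"
console.log(x.anything()); // "you looked up anything"
```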

Why does this matter? It allows us incredibly clean syntax, as you’ll see shortly.

There’s one more thing you should know. We’re using the wonderful ‘co’ library to make our async code much cleaner. As a tl;dr, it abuses ES6 generators to simulate the async/await features coming in ES7. So for example, it turns this:
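(Both snippets here are hypothetical stand-ins rather than the original examples; getUser and getStats are made-up helpers.)

```javascript
// Hypothetical callback-style code (getUser/getStats are made-up helpers).
function loadProfile(id, done) {
  getUser(id, (err, user) => {
    if (err) return done(err);
    getStats(user, (err, stats) => {
      if (err) return done(err);
      done(null, { user, stats });
    });
  });
}
```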

Into this:
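```javascript
// The same hypothetical flow with co: yield each promise-returning helper.
const co = require('co');

const loadProfile = co.wrap(function* (id) {
  const user = yield getUser(id);    // getUser/getStats now return promises
  const stats = yield getStats(user);
  return { user, stats };
});
```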

While it may not seem that amazing in such a small example, when your results depend on 8 different callbacks it makes all the difference in the world.

So, with that out of the way, let me show you how this library is actually used. It’s gone through a lot of iterations, but I think we’re rapidly approaching what the final syntax will look like. Here’s an example of its use from the tests:
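(A sketch standing in for the actual test code; add is the remote method from the walkthrough below.)

```javascript
const co = require('co');
const assert = require('assert');

co(function* () {
  // client: an Intercom RPC client (connection setup omitted).
  // The proxy lets us call the remote method by name, as if it were local.
  const sum = yield client.rpc.add(99, 98);
  assert.equal(sum, 197);
});
```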

Now, this is with an already connected RPC client, but if you have any experience with JS RPC libraries you can see a big difference: you call the remote method by its actual name, as if it were a local method. Before we implemented the proxies, here’s what it looked like:
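(Again a sketch of the shape rather than the exact code: the method name went in as a string and the arguments went in as an array.)

```javascript
co(function* () {
  // Sketch of the pre-proxy call style: method name as a string, args in [].
  const sum = yield client.call('add', [99, 98]);
  assert.equal(sum, 197);
});
```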

While still usable, it certainly wasn’t clean. We also had a version that didn’t require the []’s, and another that used objects instead, but none of them were satisfying to me. I knew we could do better.

So, how does this magic proxy work? I’ll deconstruct the code below:
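(Intercom itself isn’t open-sourced yet, so the block here is a sketch that follows the walkthrough rather than the real source; the function name, the timeout handling and the ‘rpc’ event name are assumptions.)

```javascript
// Sketch of the client side; createRpcProxy and the field names are assumed.
function createRpcProxy(scope) {
  // A proxy handler is a plain object, so `this` can't be rebound inside it.
  // The caller hands us an explicit scope (the connected client) instead.
  const self = scope;

  // Every property lookup on the returned proxy runs through this get trap.
  return new Proxy({}, {
    get(o, name) {
      // If the target really has the property, just return it.
      if (name in o) return o[name];

      // Otherwise treat the name as a remote command and hand back a function
      // that accepts any number of arguments (ES6 rest parameters).
      return (...args) => new Promise((resolve, reject) => {
        // Don't let a request hang forever if the remote service is down.
        const timer = setTimeout(
          () => reject(new Error('rpc timeout: ' + String(name))),
          self.timeout
        );

        // Group the options for the Intercom library on the other side.
        const payload = { command: name, args };

        // Emit the call over socket.io and settle the promise on the ack.
        self.socket.emit('rpc', payload, (err, result) => {
          clearTimeout(timer);
          if (err) reject(err);
          else resolve(result);
        });
      });
    }
  });
}

// Roughly: client.rpc = createRpcProxy(client);
```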

It’s a formidable block of code, but we’ll take it one step at a time.

Pretty simple start. We’re just creating a proxy to be used later, and requiring a scope argument. Because the proxy handler is a plain object, this can’t be rebound inside it, so we need to provide an explicit outside scope, and that’s what self is.

This is where the magic begins. The Proxy object is a built-in in ES6, and we can only use it by passing a special experimental flag to io.js, but it’s worth it.

This is called whenever a lookup is done on the object in question. Spoiler alert, we assign this proxy to .rpc, which means if we called yield client.rpc.add(99, 98), then o will be the actual object (not actually used), and name will be add.

This just checks if the object actually has the property we want, and if it does, return it.

This uses the new ES6 rest parameter syntax, which collects a variable number of arguments. Basically we’re creating a function which can accept any number of arguments, pass them along the network, and handle everything else.

Creates a promise, pretty standard stuff.

This gives us flexibility over how long we let a request hang before timing out. Due to the microservice nature of Next, there’s always the potential for a server to be down or hanging, and we want to be able to deal with that.

We’re just grouping together the options to be received by the Intercom library on the other side. Nothing fancy, except that args is the array of arguments I mentioned before.

This uses socket.io (for now; we may change to something lower level eventually) to emit the RPC call to the server, rejecting the promise if the server reports an error and resolving it otherwise.

So, that’s the client side code. The server side code is quite a bit more complex!
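(Like the client, this is a sketch assembled from the walkthrough below rather than the actual Intercom source; createServer, the generator check and the payload shape are assumptions.)

```javascript
const co = require('co');
const socketio = require('socket.io');

// Wrap one API function so plain functions, callback-style functions and
// generators all look the same to the dispatcher.
function wrap(fn, socket) {
  return (args, callback) => {
    if (!args) args = [];
    args.push(callback); // a function may handle the callback itself

    if (fn.constructor.name === 'GeneratorFunction') {
      // Generators run through co first, with the socket as `this`.
      co(fn.apply(socket, args))
        .then(result => callback(null, result))
        .catch(err => callback(err.message));
    } else {
      const result = fn.apply(socket, args);
      // If the function returned something, complete the callback with it;
      // otherwise assume it is dealing with the callback on its own.
      if (result !== undefined) callback(null, result);
    }
  };
}

function createServer({ port, api }) {
  const io = socketio(port);

  io.on('connection', socket => {
    // Store each wrapped API function in an ES6 Map, keyed by command name.
    const handlers = new Map();
    if (api) {
      for (const name of Object.keys(api)) {
        handlers.set(name, wrap(api[name], socket));
      }
    }

    // Dispatch anything arriving on the 'rpc' channel by command name.
    socket.on('rpc', (data, callback) => {
      if (!data || !data.command) return callback('missing command');
      const handler = handlers.get(data.command);
      if (!handler) return callback('unknown command: ' + data.command);
      handler(data.args, callback);
    });
  });

  return io;
}
```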

Boy, that’s a lot less clear than the client side code.

All this is doing is handling connections; once connected, it checks whether we’ve defined a server API. Let me show you what a server API looks like so you can understand what it’s doing from here on out:
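(Again an assumed shape: createServer is the helper from the server sketch above, and sleep stands in for any promise-returning delay.)

```javascript
// Sketch of a server API definition.
createServer({
  port: 9090,
  api: {
    // A plain function: its return value is sent straight back to the caller.
    add(a, b) {
      return a + b;
    },

    // A generator: it runs through co on the server before the response goes out.
    *slowEcho(message) {
      yield sleep(250); // sleep is a hypothetical promise-based delay
      return message;
    }
  }
});
```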

All this does is create a server on 9090, and put those functions into the API. So in the previous code, it’s just looping through those functions.

So now we set up a wrapper. We accept any args and a callback. If there are no arguments, we make an empty array. We push the callback onto the end so if a function wants to handle the callback manually it can.

Then we check if it’s a generator function. If it is, we want to run it through co before we do anything else (like slowEcho above). So we apply it with the socket as this and then handle the callbacks. If it’s not a generator, we run the function and check if it returned anything. If it did, we complete the callback with that return data. If it isn’t a generator and doesn’t return anything, we assume the function is handling the callback itself.

Then we just store that wrapper as what should be called whenever that name is called. This is a Map, a new ES6 data structure very similar to dictionaries in other languages.

Finally, we set up the listeners. If we get any data on the ‘rpc’ channel, check if it has a command. If it has a command, check if we have it. If we don’t, error that back to the caller. If we do, run that wrapper function!

It’s a bit of behind-the-scenes setup, but it allows us to very easily talk to microservices without the usual syntactic overhead. With a simple connection we’re calling dot methods without ever having to ask the server for a list of methods first. This is also useful because it allows us to modify the API on the fly without the client needing to be notified about it.

As an aside, this library will be open-sourced as we feel it has a lot of utility outside of TagPro.

That’s all for this week. If you have any questions about this code, please do feel free to ask!

Discuss this post on Reddit

TagPro Next Status – 2015-05-10

Over the week we worked with three artists on a single asset: the flag. It’s going really well, and it’s a lot of fun watching the artists go from concept to polished asset. By next week, we should have the TagPro Next artist selected!

The author of another web game approached us about working together going forward. Not necessarily directly on each other’s products, but in more of a partnership. It’s likely the two teams will meet soon to discuss it further.

We are developing a scaled-down prototype engine for experimenting with the new “feel” of TagPro. It will allow us to easily test things like friction, acceleration, max speeds, boosts, etc. It will also test normalized movement, meaning moving diagonally will no longer let you move faster (sketched below). It’s possible that in the future we will produce a build of the prototype with the values we decide on for community feedback.
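As a rough illustration of the idea (hypothetical code, not the prototype engine), normalizing the input direction keeps a diagonal from being faster than a straight line:

```javascript
// Hypothetical sketch: clamp the input direction to unit length so holding
// two keys (a diagonal) is no faster than holding one.
function movementVector(up, down, left, right, speed) {
  let x = (right ? 1 : 0) - (left ? 1 : 0);
  let y = (down ? 1 : 0) - (up ? 1 : 0);
  const length = Math.sqrt(x * x + y * y);
  if (length > 0) {
    x /= length;
    y /= length;
  }
  return { x: x * speed, y: y * speed };
}
```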

AMorpork plans to produce a blog entry in the coming week on our inter-process communication layer for TagPro Next: Intercom. It’s fairly unique and a piece we plan to open source.

Discuss this post on Reddit

TagPro Next Status – 2015-05-03

We received a ton of great submissions from our ad for a game artist. After meticulously reviewing them, we narrowed it down to four artists. We’ve decided to work with each artist to create an animated flag asset. This will allow us to get to know the artist, their process, their style and the overall cost of bringing an asset from concept to final product.

We made some progress this week on the automated server spawning. The service is actually spawning servers now. We plan to integrate the code that estimates the best server layout with the code that does the actual spawning this coming week.

Continued progress on the world joiner. Its initial iteration is nearly finished and should be ready for a code review soon. The joiner matches players by their preferred (lower-latency) servers. A rank matching system is also in development, which matches together players who are close in skill. Because finding an exact match may take a long time, the joiner will slowly widen the allowed skill gap over time (sketched below); the intention is to quickly find players of similar skill. The measure used to determine a player’s rank hasn’t been settled yet, but it will most likely be based on recent win percentage. We are also researching skill measurement systems like Elo, TrueSkill, and Glicko. Work to integrate the joiner and the server spawner will need to begin soon.
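As a rough sketch of the widening-gap idea (the numbers and names here are illustrative, not the joiner’s actual values):

```javascript
// Illustrative only: the acceptable rank gap grows the longer a player waits,
// so an exact match is preferred but a near match is still found quickly.
function allowedRankGap(secondsWaiting) {
  const baseGap = 25;        // start by only accepting very close opponents
  const growthPerSecond = 5; // loosen the requirement as the wait drags on
  return baseGap + growthPerSecond * secondsWaiting;
}

function isMatch(playerA, playerB, secondsWaiting) {
  return Math.abs(playerA.rank - playerB.rank) <= allowedRankGap(secondsWaiting);
}
```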

As we are fairly certain the client will be built using React components, we’ve begun exploring the FluxThis framework for the client.

Discuss this post on Reddit

We are looking for the right artist to supply us with assets for TagPro Next.

Read the Full Ad

Discuss this post on Reddit

The Next TagPro


TagPro has been a side project for over two years and it’s been a lot of fun watching it grow. Through that growth I’ve met a lot of super talented folks who wanted to help, and they have become my friends. steppin, AMorpork, NewCompte and ylambda care about TagPro just as much as I do, and I’ve decided to share ownership of Koalabeast with them. We all believe that, with the help of the community, we can take this to the next level.

What does this mean for TagPro “Classic”?

For the rest of this post, I’ll refer to the current TagPro we love as TagPro Classic. We will continue to maintain and support TagPro Classic. Users should not experience a degradation in quality. As far as major new features go, the code is considered closed at this time. However, we will continue to fix bugs, implement minor features and create events. There are no plans to sunset TagPro Classic. We love TagPro Classic.

Announcing TagPro “Next”

We are completely rewriting TagPro from the bottom up – code named TagPro Next for now. Not just the game and graphics but the underlying server side architecture. Every piece of TagPro will be rewritten and vastly improved. We’ll also be laying down a much better foundation that will allow us more flexibility in bringing new features forward. While TagPro Next will be highly inspired by TagPro Classic, it will not be a direct clone – we aim to improve the gameplay, while maintaining the basic simplicity we all enjoy.

Platforms
Initial releases will be targeted at Steam, Web Browser, Facebook and Kongregate. Not necessarily in that order. Thereafter we’ll be exploring Wii, Android and iOS.

Price
Free, with a similar revenue model to TagPro Classic: purchasing cosmetic-only flair, ball skins (heh) and honks (maybe). There will be ads on the Facebook and Kongregate builds – that’s out of our control. We’ll probably have ads on the browser version. TagPro will never be pay-to-win.

Some Initial Plans

The list below is very preliminary and not a complete list.

Polish
Lots of visual polish compared to TagPro Classic. The spike concept art at the top of this post is a good example of what we’re aiming for. We also aim to produce a polished user interface.

Ranked Play
Ranked team play will be a top priority in TagPro Next. Players will be able to form official teams and enter the matchmaker to find other teams looking for a ranked match.

Lobby System
We plan to implement a feature-rich lobby system. There will be rooms (public or private) to hang out in and assemble matches.

Friends
A fully featured friend system.

Game Modes
The initial release will focus on the best game mode: Mars Ball. I’m kidding; it will focus on Capture the Flag. However, we are designing the system in such a way that we’ll be able to easily introduce new game modes in the future.

Server Spawning
TagPro Next will not have named servers. Instead, servers will spawn and despawn automatically based on the number of players online and their geographical location. There will be a single point of entry for joining a game, and it will place you on the best possible server at that moment.

Stats
We plan to do a much better job with stats. Each game mode will be able to share stats with other game modes or have completely unique stats.

Map Making
A map editor and tester will be built into TagPro Next.

Lag
Latency is always an issue; we aim to explore using UDP to reduce lag.

API
We will supply a proper API for modding as well as a server API for stats.

Community, What You Can Do

We can’t do this without your support. Much like the initial development of TagPro in the early days, we plan to be very transparent with our progress. We plan to use this progress blog, and while it will initially be rather bland as we lay down foundation code, we hope it will become a place of unified excitement and criticism. We’ll also be updating /r/tagpro and /r/koalabeast with posts for discussion.

Keep Playing TagPro!
Keep inviting your friends and keep rolling. Your skills will carry over and we plan for some of the flair to carry over also!

Alpha Versions
Once we have prototypes, you’ll be the first to know. We plan to have some way of inviting TagPro Classic players to play alphas in waves, likely based on time played or special invites.

Beta Versions
We plan to have an open beta period just before we launch on Steam Greenlight. We’ll need the community’s support to get through Greenlight.

Comments/Suggestions
We want to know what you expect from this project. Community feedback has always been important while developing new features for Classic, and Next is no different.

Discuss this post on Reddit


Copyright © 2019 Koalabeast Games
