Building a Distributed Turn-Based Game System in Elixir (fly.io)
352 points by itsjloh on April 30, 2021 | hide | past | favorite | 115 comments


I'm currently building a distributed MMO emulator in Elixir, and it's going well (performance-, bug-, and effort-wise). Distribution is handled via the Horde and libcluster libraries, Ranch serves as the acceptor pool, and I use Livebook for debugging and performance testing.

I couldn't really ask for anything better. As a solo dev, in less than 100 hours I had basic distribution solved: one process per map, one per player. Maps are spread around the Erlang cluster using Horde; if a map crashes, players seamlessly reconnect in less than a second, and the map may come back up on a whole different system, especially when the original node becomes non-viable. On top of that: basic monster AI and fighting logic, respawns and monster drops, and extensive debuggability using Observer (a GUI tool included in Erlang that lets me send messages to processes, see their status and what they're doing, and even kill them anywhere in the cluster). I can also use Livebook for more extensive programming: just connect to the cluster, execute a markdown file's code blocks, and see the results live in-game and in the livebook.

It's a great, easy language for this. I may consider moving a couple of things to Rust via Rustler though, for the fun of learning about NIFs.
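A minimal sketch of that one-process-per-map idea, assuming Horde's documented Horde.DynamicSupervisor and Horde.Registry APIs (module and map names here are hypothetical, not from the actual project):

  defmodule Game.MapServer do
    use GenServer

    # Each map is a GenServer registered cluster-wide under its map id,
    # so lookups work regardless of which node the process landed on.
    def start_link(map_id) do
      GenServer.start_link(__MODULE__, map_id,
        name: {:via, Horde.Registry, {Game.MapRegistry, map_id}})
    end

    def init(map_id), do: {:ok, %{id: map_id, players: %{}}}
  end

  # Horde picks a node for the child and restarts it elsewhere if that
  # node goes down, which is what makes the seamless
  # reconnect-on-another-machine behaviour possible.
  Horde.DynamicSupervisor.start_child(Game.MapSupervisor, {Game.MapServer, "map_42"})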


This sounds quite interesting!

Any chance you mind sharing that code on GitHub?


It's on GitLab, but a lot of stuff isn't on it yet / changes a lot, and I take my time :)

https://gitlab.com/zen_core/zen_core — and you need assets from the original game to use it.

I made a small example clip to show how I use Livebook to simulate player load (the simulated players use exactly the same GenServer as real players, just with the packets already deserialized):

https://cdn.discordapp.com/attachments/819516082154242048/83...


Damn, HN would be the last place where I would expect to see Metin. Is the community still there? It was huge in Central Europe ~15 years ago.


No clue. I left the community around 2010 and returned in 2014, after the source was leaked, to teach myself about MMO development. I really don't know much beyond the fact that some private servers still have 10k+ players, and that a few legitimate businesses / other games came out of the community.


Looking at your video, performance does not look good, though that's expected from Elixir; Erlang is a pretty slow language.


What kind of performance issue do you notice? It's a relatively old video. At 500 simultaneous players in a single map, it takes around 25ms to update the AI + quadtree + a list update of who sees whom, while also processing 500 moves + broadcasting that info to everyone in view (which is all 500).

This is the worst case for it; the other maps see no load from this, and it's basically a single-core bottleneck. It's also very unoptimized code: 500 processes sending messages to a single map process.


As soon as the 500 players are spread around the map (so that it doesn't have to handle 500 entities sending updates to all other 500), it takes around 3 ms.

Or, if they're spread across all 70+ maps, we see good utilization of every scheduler, but most map processes still spend most of their time idling.


In the graph we're looking at, when it jumps to 75%+ CPU usage, is that a single core? It's also using 700MB+ of memory.


There you go, an updated video of the current version: https://cdn.discordapp.com/attachments/819516082154242048/83...

5 times the players (250), tops out at ~35% scheduler utilization.


I can't overstate how good Elixir + LiveView is, especially for prototyping. I have been experimenting with it ever since it first came out, mostly games, and I have to say that building even simple multiplayer games is a lot of fun.

An example:

https://dev.voppe.it/chess

(FYI there is no actual game here, just pick emojis, place them in the arena and watch them fight to the death via a poorly designed combat system. Yes, those placing emojis other than you are other players.)

This was a prototype that I built to see if real-time multiplayer games were feasible. Apparently LiveView manages high tickrates decently enough to be a valid solution for (quasi) real-time games. This game for example runs at a whopping 8 ticks/s! It can handle more, as I've tried developing with faster tickrates, I think as high as 24 t/s, but I want to avoid server strain. With enough HTML/CSS/SVG wizardry you can get away with quite a lot. But the most amazing thing was the fact that there was no need to fiddle around with state syncing. Everything just works! I have nothing but praise for it.
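A rough sketch of the kind of server-side tick loop such a game might use (module names are hypothetical; assumes Phoenix.PubSub, which ships with Phoenix):

  defmodule Arena.Ticker do
    use GenServer

    @tick_ms div(1000, 8)  # 8 ticks per second

    def start_link(_opts), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

    def init(state) do
      Process.send_after(self(), :tick, @tick_ms)
      {:ok, state}
    end

    def handle_info(:tick, state) do
      new_state = step_combat(state)
      # Every subscribed LiveView receives this and re-renders its diff;
      # no manual state syncing needed on the client.
      Phoenix.PubSub.broadcast(Arena.PubSub, "arena", {:tick, new_state})
      Process.send_after(self(), :tick, @tick_ms)
      {:noreply, new_state}
    end

    # Combat logic elided.
    defp step_combat(state), do: state
  end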

Try LiveView if you have the chance!


I'm curious about how race conditions would be handled when multiple users, on different regional LiveView servers, take conflicting actions.

In the "Let's walk it through" section, it seems like the Player-to-LiveView connection will process user input (e.g. a Tic-Tac-Toe move) and update the UI to acknowledge this, at which point the user can be assured that the LiveView server accepted their input. But it seems like this happens before the GameServer has also accepted the input. What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?

Granted, in Tic-Tac-Toe the game is simple enough that this is neatly avoided: each regional LiveView server has enough information to only allow the current player to make a play. But in more complex applications, how might you (anyone; curious for discussion) handle this?

One answer is something like: The LiveView server is effectively producing optimistic updates, and the GameServer would need to produce an authoritative ordering of events and tell the various LiveServers which of the optimistic updates lost a race and should be backed out.


> What if Player 2 made a conflicting play and their change was accepted by the GameServer before Player 1's change reached the GameServer?

Not sure I understand the question, but I don't see how this would happen.

On the BEAM it's processes all the way down. There's a process for that instance of the game, which is basically a big state machine, and 2 processes representing the client state, one for each player.

When the game (process) starts, it expects a message from player 1 (process), then one from player 2, and so on.

If there's a client timeout or network disconnection, the player process affected crashes, and if the app has been architected well, the other player process and game process are in a supervision tree, so they crash as well, perhaps notifying the other player that the game has ended because of a disconnection from the other peer.

But none of this will accept a move from player 2 when it's player 1's turn.
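The turn enforcement described above can be sketched as pattern matching in the game process (a hedged example with hypothetical module names; the point is that the process mailbox serializes all moves, so there is no race to win):

  defmodule TicTacToe.Game do
    use GenServer

    def init(_), do: {:ok, %{board: %{}, turn: :player1}}

    # A move from the wrong player is simply rejected.
    def handle_call({:move, player, _square}, _from, %{turn: turn} = state)
        when player != turn do
      {:reply, {:error, :not_your_turn}, state}
    end

    def handle_call({:move, player, square}, _from, state) do
      state =
        state
        |> put_in([:board, square], player)
        |> Map.put(:turn, other(player))

      {:reply, {:ok, state}, state}
    end

    defp other(:player1), do: :player2
    defp other(:player2), do: :player1
  end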


Thanks for your reply!

This is very interesting - I'm pretty unfamiliar with BEAM. Does this "processes all the way down" span across machines/VMs?

From the article, it seemed like there could be two players, each connecting to different LiveServer instances (on different VMs/hardware in different geographic regions) which in turn communicate async via one central GameServer.

In the article, it seems like a message from Player 1 to LiveServer 1 doesn't need to wait for the message to also reach the central GameServer and be acknowledged before LiveServer 1 acks the change back to Player 1. This seems to allow races, since the central GameServer is the source of truth but the Player1/LiveServer1 communication can complete a message/ack round-trip without waiting for acknowledgement from the GameServer.

I guess an alternative would be for the system to require a message from Player 1 to be passed to LiveServer 1, then passed on to the central Game Server which acks back to LiveServer 1, which finally can ack back to Player 1 -- this means that Player 1 would still need to pay full round-trip latency to LS1 and then to the GameServer for any action.

Thanks for any light you can shed on this!


Here's the relevant part from the article:

> The browser click triggers an event in the player's LiveView. There is a bi-directional websocket connection from the browser to LiveView.

> The LiveView process sends a message to the game server for the player's move.

> The GameServer uses Phoenix.PubSub to publish the updated state of game ABCD.

> The player's LiveView is subscribed to notifications for any updates to game ABCD. The LiveView receives the new game state. This automatically triggers LiveView to re-render the game immediately pushing the UI changes out to the player's browser.

So you can see that when Player 1 performs an action, the action is sent to the GameServer. Player 1's UI is only updated once the GameServer has published the new game state via PubSub back to Player 1's LiveView process, which pushes it to the client. So there is the latency of going from client to LV to GameServer and back again, but there is no race possibility.
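A hedged sketch of what that flow looks like in LiveView code (module names are hypothetical; GameServer.move/3 is an assumed API, not taken from the article):

  defmodule GameWeb.GameLive do
    use Phoenix.LiveView

    def mount(%{"id" => game_id}, _session, socket) do
      # Subscribe to authoritative state updates for this game.
      Phoenix.PubSub.subscribe(Game.PubSub, "game:" <> game_id)
      {:ok, assign(socket, game_id: game_id, game: nil)}
    end

    # The browser click arrives over the websocket...
    def handle_event("move", %{"square" => square}, socket) do
      # ...and is forwarded to the GameServer. The UI is NOT updated here.
      GameServer.move(socket.assigns.game_id, self(), square)
      {:noreply, socket}
    end

    # Only when the GameServer publishes new state does the re-render happen.
    def handle_info({:game_updated, game}, socket) do
      {:noreply, assign(socket, game: game)}
    end
  end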


> I'm pretty unfamiliar with BEAM. Does this "processes all the way down" span across machines/VMs?

Yes. For example, if you had a process registered under the name :alice on a different machine (called a Node in Erlang), you could send it a message from another node by using the node identifier as an additional element:

  [coolest_node | _rest_of_nodes] = Node.list()

  send({:alice, coolest_node}, :hi)


Ah, that's indeed a good question, though those are implementation details of Fly.io I'm unaware of.


You could use CRDTs for more complex games which are not turn-based.

https://moosecode.nl/blog/how_deltacrdt_can_help_write_distr...


It is turn-based and sync is easy: unless you are the one whose turn it is, whatever you do can be safely ignored. Once you move to non-turn-based...


It's the same as with your mobile phone when you lose your wi-fi signal. Everything pauses and everybody has to wait.

Have you played games like HOMAM 1 or 2? You can't do anything while the CPU is playing its players. You can watch where it goes and what it does, but that's it. When it's finished, you go.

When there is a network error, some message ("please wait...", a loading spinner) should be shown in the meantime.

For turn-based RPG games or Chess etc. this is a non-issue.

Of course, real-time action games etc. are not a good fit for this technology.


Your answer is pretty close to what most people do: https://en.m.wikipedia.org/wiki/Client-side_prediction


There's no need for client side prediction or optimistic UI on (most) Live View projects.

It's all done on the server.


Latency is the reason. Even in a turn based game it still feels really bad to make a move and have to wait for it to make its way through the round trip before seeing the result. In a game with strict ordering like Tic Tac Toe there is little reason not to show the chosen move immediately.


Sure, that's why I said "most use cases".

I mean, 100 ms between a click and a cross appearing on screen is not great user experience, but it's not even the worst. If you're writing a game, a little client side prediction is a good idea.

But if you have a form with instant validation, or any old regular UI, that is not necessary at all. The only built in optimistic UI functionality on Live View is disabling a button when you press it and wait for the server to respond, to avoid double submissions.


> But if you have a form with instant validation, or any old regular UI, that is not necessary at all.

Arguably because you're trusting the client, and the built-in behavior is therefore optimistic by default. Then hopefully validating on submission server-side.


From the tech talks I vaguely recall, LiveView folks seem to disregard latency, which is where the entire model falls apart for me because the moment you need more control on the client over what to do when the server is not responding - you’re entirely out of luck.

Though maybe I’m wrong and there has been some new developments to address this, I wasn’t following too closely.


On the contrary, the LiveView documentation acknowledges this and suggests handling such scenarios using client-side tools:

> There are also use cases which are a bad fit for LiveView:
>
> Animations - animations, menus, and general UI events that do not need the server in the first place are a bad fit for LiveView. Those can be achieved without LiveView in multiple ways, such as with CSS and CSS transitions, using LiveView hooks, or even integrating with UI toolkits designed for this purpose, such as Bootstrap, Alpine.JS, and similar.

https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#m...


Sorry, that’s not acknowledging that latency can become an issue, that’s acknowledging that using server-side rendering for things that don’t require a server isn’t the best of ideas (shocker, I know).


You would have latency in all apps that require a server round-trip regardless of the stack used.

When you need to go to the server, you go to the server. There's no other way around it.

I would be curious to hear how you solve this in other stacks? SPAs, whatever, when they need something from the server, they reach for the server.


imagine an SPA for a basic CRUD system. there's a list view and details view with a delete button that returns you to the list.

in liveview server renders me the list view, i click details, server renders me details view, i click delete button, server renders me the list view.

if there's big latency/connection error/etc between clicking delete and getting back the rendered list - user just has to wait.

in spa i could optimistically assume that delete worked and render the list that i already have cached without the deleted item, allowing user to continue working immediately and if there was a disconnect/error - i could retry that delete in the background without bothering user, only prompting them after some number of retries.

i don't see how i could implement this workflow in liveview.


You can do that in LiveView just as easily. Remove the item client-side, then pushEvent to the server to handle the deletion. In case of any errors, notify the user, refresh the state, etc.

pushEvent, pushEventTo (from client to server) [0]

push_event (from server to client) [1]

[0] https://hexdocs.pm/phoenix_live_view/js-interop.html#client-...

[1] https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#p...
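A hedged sketch of the server side of that pattern (Items.delete/1 is a hypothetical context function; the client-side optimistic row removal would live in a JS hook that calls pushEvent):

  def handle_event("delete", %{"id" => id}, socket) do
    case Items.delete(id) do
      :ok ->
        {:noreply, socket}

      {:error, _reason} ->
        # The client's optimistic removal failed: push an event back so
        # the hook can restore the row and show a notice.
        {:noreply, push_event(socket, "restore-row", %{id: id})}
    end
  end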


> Remove the item client side

so, "just use JS"?

> In case of any errors, notify the user, refresh the state .etc

so, "just use JS"?

every time you say "just use JS" you're diminishing the use case of liveview, because if i need that much js logic - why do i need liveview at all, when i could just use a framework/environment where i can share a codebase between client and server seamlessly and have full control?


You stated "don't see how could i implement this workflow in liveview". I've presented you a way.

> In case of any errors, notify the user, refresh the state, etc.

This would all be done server-side, and the client side would simply react automatically. The client-side code in this case would be minimal.

I don't think I'm diminishing anything. For quite a few years I was neck deep in React/Vue world. Now that I'm actively using LiveView, I can properly compare the differences between both approaches, cons and pros. For any new project, in the majority of cases I would pick Elixir with Phoenix LiveView instead of Elixir/Phoenix (backend) with React/Vue (frontend).


You’ve presented a workaround and a hack, tbh, not something natively supported, because the workflow doesn’t map to the liveview model. Which is fine, but you have to be honest with yourself and acknowledge when stuff like that happens, otherwise you’re in for lots of fun down the line.

> For any new project, in the majority of cases I would pick Elixir with Phoenix LiveView instead of Elixir/Phoenix (backend) with React/Vue (frontend).

This could just be recency bias. New tech is always exciting, old tech is always linked to memories of all the issues you’ve had in the past.


I wouldn't say it's a hack, but I would agree it's not the standard way to do things in the LiveView world, exactly because latency is an overstated or misunderstood issue. But if you want to do more, LiveView gives you the tools.

> This could just be recency bias. New tech is always exciting, old tech is always linked to memories of all the issues you’ve had in the past.

I still maintain some React/Vue apps and work with them on a daily basis, so it's not a distant memory.

I like choosing the right tool for the job. For example, I would still choose React Native over two different code bases for a mobile app for a small team that needs to move fast; for the 5% of cases where that won't do, you'd need to go native. I see the situation similarly with LiveView. It's hard to beat its productivity and power in 95% of use cases.


I did a deep dive into LiveView and this was my takeaway.

It's nice tech, but once you start introducing JS again to improve UX, you really start asking yourself why you didn't just build it with React in the first place.


You could just delete the row with Alpine or a 3-line JS hook if you wanted to; it's quick and easy. That sounds like a strange workflow though, it's generally better to make users wait for deletion.


isn't it funny that when you're trying to praise tech you like, all sorts of examples jump into mind, but when you try criticizing something you like - all that imagination vanishes and all existing examples can be dismissed as strange :)


I find it strange because that's not a behaviour I would use, but to each their own, it's the beauty of the web :).

If you really want to do it, you can add 3 lines on your project and that will work with any CRUD page you're building, I don't think that's unreasonable or difficult to do.

Edit:

Actually, thinking about it, if you just made a form for that delete button with a phx-disable-with="" on the row, it would probably work straight away without any JS hook.
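For illustration, a hedged sketch of that idea as a HEEx template (the event name and assigns are hypothetical; phx-disable-with disables the submit button and swaps its text until the server acknowledges the event):

  <form phx-submit="delete_row">
    <input type="hidden" name="id" value={@row.id} />
    <button type="submit" phx-disable-with="Deleting...">Delete</button>
  </form>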


surely you recognize that there is a gap in functionality between liveview and fully fledged frameworks that provide more granular control over ui interactions?


Like which other frameworks? You can code that example feature you pointed out quicker than in React if you want to. You have all the control you want in LiveView.

If you want a delete button per row, no code is needed and the phx-disable-with will work out of the box, if you want a global delete button on the top which deletes multiple rows front-end first before acknowledgement (with checkboxes + delete like in Gmail), 5 lines of JS maximum in a hook and you're set.


> You have all the control you want in LiveView

that you can't even acknowledge that there is a gap in functionality between liveview, a fairly opinionated framework for server side rendering and fully fledged client-side frameworks tells me this is not going to be a productive conversation, so i'm out, bye


Have you even used LiveView? It's not opinionated in any way, you can do whatever you want with it. It gives you extra features to remotely change pages but if you don't like having them you don't have to use them at all and can plug your favourite JS framework if you want to (or you can just use it for parts of the apps and not the rest if you want to).

I've worked for years with React and Angular and I don't really miss anything with a LiveView-based stack. LiveView's features give you 90% of what you want out of the box, and for the rest it's fine having a bit of JS here and there to ensure a good experience.


What exactly are you thinking in terms of latency becoming an issue under LiveView but not on normal requests?

Do you mean when websites just need a full refresh because they lost their requests on some callback and no-one implemented recovery across the 5 levels of callbacks or something more specific?


Well, with LiveView you go "full server state" for everything that you would normally just use plain JS for, for instance toggling a checkbox or collapsing a div.

Having latency on such low-level interactions might make the UI feel sluggish as a whole.


Yeah certainly, but I'm not sure people are using it that way?

Most examples are to show cool stuff you can do, they're not production vetted. Like most JS examples out there don't really mean that people should be publishing live credentials with their bundles.

I imagine in most cases one would leave everything that is not behind a logged in status as normal routes/pages (signin, landing, contacts, etc). Or if not that those would be things requiring a socket/real-time interface anyway.

For the interactions, I don't think you even need alpine.js: plain setup on DOMContentLoaded, CSS dropdowns/collapsibles that are replaced on JS load, proxying LiveView DOM/morphdom events (if needed) so other components (even Vue, React, etc.) can listen to them, and CSS animations.

  import { setup_live } from "./phx/setup_live.js";
  import { setup_dropdowns } from "./interactivity/dropdowns.js";
  import { setup_collapsibles } from "./interactivity/collapsibles.js";
  import { setup_links } from "./interactivity/links.js";
  import { setup_inputs } from "./interactivity/inputs.js";

  function setup() {
      setup_live();
      setup_links();
      setup_dropdowns();
      setup_collapsibles();
      setup_inputs();
  }

  document.addEventListener("DOMContentLoaded", setup);


I went as far as having `onclick` handlers and global window functions. Complete heresy. Yes, it's not 100% JS-free, but it's pretty low overhead.

Then LiveView is mostly for your admin dashboards and logged in users views, where it makes it pretty easy to do real-time feedback type of interactions/views and spa like navigation. Since you have proper auth and identification on the user, you can just log them off, rate-limit, block an account, and close their socket if needed.


Have you figured out any good way to have per-page JavaScript, where the JavaScript is only sent over the wire for that page?


Off the top of my head, not really, but it would depend on a few things:

- Is it a vendor lib?

- It's not, but it's some particular file big enough that including it in the root layout doesn't make sense?

- It's neither, but functionality that can trigger multiple times and should run only once, and only on those pages, because it can conflict? Or some variation of that?

I think they're all solvable, but what makes sense will depend on those, and also on how you're using LiveView (is LiveView only for logged-in users/some auth, can you set those on the live_view layout...).

But in some cases this is a problem in SPAs too, where you have to use a snippet to check whether the lib has been loaded and, if not, add a script tag to the body, load it through JS, etc.


You don't have to. You can totally use JS for these in Liveview



Possibly many LiveView tech demos / projects by the community haven't put much thought into latency, but LiveView itself contains a built-in latency simulator[0]. Additionally, it can toggle classes on elements when you click them and turn them off again when an acknowledgment has been received from the backend [1]. Finally, you have the JS hooks, through which you can implement any kind of loading indication you want on the frontend. So the tools are there, they just need to be used.

[0] https://hexdocs.pm/phoenix_live_view/js-interop.html#simulat...

[1] https://hexdocs.pm/phoenix_live_view/js-interop.html#loading...


One trick I remember using (~two years ago, so early LV) when handling click events was to put everything async / not needed for the reply inside a spawn() call.

But yes, as soon as you're on the internet you'll often feel the delay if your app is interactive.

The problem is that it's a bit random, because network and VM performance are never totally consistent.

I remember implementing a countdown (using a 1s send_after()) that would work fine most of the time, but sometimes there would be a hiccup: the countdown would stall a bit and then process the counter in an accelerated fashion, which was terrible from a UI point of view. So in the end I did it in JS, except for the update once the end was reached.
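One common fix for that drift (a sketch, not necessarily what was done here) is to schedule each tick against an absolute deadline instead of a fixed 1-second delay, so hiccups don't accumulate:

  def handle_info(:tick, %{deadline: deadline, count: count} = state) do
    next = deadline + 1_000
    # Sleep only for the time remaining until the next absolute deadline;
    # a late tick is followed by a correspondingly shorter wait.
    delay = max(next - System.monotonic_time(:millisecond), 0)
    Process.send_after(self(), :tick, delay)
    {:noreply, %{state | deadline: next, count: count - 1}}
  end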


Great article. I'm currently building a turn-based multiplayer improv piano game for the web using Elixir/Phoenix.

In a nutshell, you compete with other players on creating freestyle piano solos over backing tracks using a MIDI keyboard. Players and audience members then listen through the solos and vote for their favorite.

The Elixir/Phoenix stack has been an absolute superpower for building my game backend. Some examples:

    - Phoenix channels as a wonderful abstraction over 
      managing WS connections. Implementing basic chat took ~3 
      minutes on the backend.

    - The ability to model game logic with a finite state 
      machine in a GenServer. The lifecycle of a game round is 
      progressed forward by receiving incoming client events.

    - The ability to have many game servers 
      running simultaneously as a dynamic pool under a 
      DynamicSupervisor. Games can end and new games can be 
      created, all isolated and under watchdog-like 
      supervision.

    - ETS as an out-of-the-box memory cache for session data. 
      Persisting user data between pages and sessions without 
      needing to deal with an actual persistence layer.
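The DynamicSupervisor pattern from the list above, as a minimal hedged sketch (module and variable names are hypothetical, not from the actual project):

  # In the application's supervision tree:
  children = [
    {DynamicSupervisor, name: Midi.GameSupervisor, strategy: :one_for_one}
  ]

  # Starting a new game room on demand; each game is isolated under the
  # supervisor, so one crashing round cannot take down the others.
  {:ok, _pid} =
    DynamicSupervisor.start_child(Midi.GameSupervisor, {Midi.GameServer, room_id})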
For the curious, the project is called Midi Matches and is currently in public alpha:

Game (desktop chromium only): http://midimatches.com/

Gameplay Video: https://www.youtube.com/watch?v=UVD2wOCB_jE&t=41s

Repo: https://github.com/henrysdev/midimatches


I always think it's amazing how much simpler your architecture can get if you don't move any and all state to some other service in the name of keeping your app stateless. Of course, there are good reasons why the industry has moved toward stateless business-logic services, but with all the BEAM goodness (resiliency, hot code reload) it might be feasible to write apps that actually hold state again.


Apps with state are wonderful to work with. I'm pretty excited to build apps that cache in process and don't rely on 47 external services to do CRUD.


Agreed. Elixir/BEAM gives you some really interesting new ways of solving problems. A couple of years ago I built a subscription system that interacted with Stripe that only kept the state whilst a user was interacting with the system (used a GenServer as a read-through/write-through Stripe API cache that exited after they stopped using it). It was by far the least irritating subscription system I've ever worked on.


Interesting. Do you mind elaborating (if you can) on how you modeled that subscription system?


Sure thing! Basically I set out to use Stripe's built-in tools as much as possible rather than duplicating the state on our side; storing partial data in your own DB is one of the biggest pain points of subscription systems, in my experience. So this app didn't have a DB at all: Stripe was the source of truth.

The way it worked was fairly straightforward: a user would click a link from their account area, and it would include their Stripe customer ID in the request (all our users already had accounts created by our main booking system). The app then spun up a new GenServer to represent that customer, which would pull down all the data it needed from Stripe's API as soon as it initialised. There was also some generic data stored in a "global" GenServer to cover stuff like plans etc. (I can't remember how I had that refreshing, probably via webhooks).

Then as the user went through the subscription process or management process, any changes they made would be made through calls to their own personal GenServer, which on write would first write to the Stripe API, then update its own cache with the new state returned from Stripe to ensure consistency. These GenServers were kept alive by a timer set on interactions with it, and would automatically shut down 30 minutes after the user stopped interacting with it, discarding all the data it held. Then when they next return, it fetches the state from Stripe again. It also listened for webhooks and would update running GenServer instances with data it received to ensure they were consistent, and just ignored any for users that weren't currently running.

Overall I was really happy with it - it was really performant due to the data caching but also didn't suffer from staleness issues :)
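The idle-shutdown behaviour described above maps neatly onto the GenServer timeout mechanism; a hedged sketch with hypothetical names and the Stripe calls elided:

  defmodule Billing.CustomerCache do
    use GenServer

    @idle_ms :timer.minutes(30)

    def init(customer_id) do
      state = fetch_from_stripe(customer_id)  # Stripe is the source of truth
      {:ok, state, @idle_ms}
    end

    def handle_call({:update, changes}, _from, state) do
      # Write-through: write to Stripe first, then cache what Stripe returns.
      new_state = write_to_stripe(state, changes)
      {:reply, {:ok, new_state}, new_state, @idle_ms}
    end

    # No interaction for 30 minutes: the runtime sends :timeout,
    # and the process exits, discarding its cached data.
    def handle_info(:timeout, state), do: {:stop, :normal, state}

    defp fetch_from_stripe(id), do: %{id: id}           # elided
    defp write_to_stripe(state, _changes), do: state    # elided
  end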


I really like this approach, it seems like it could eliminate a lot of complexity. I will have to try it.


Thanks for taking the time to share, this feels like a good approach!


This post is essentially an ad on Fly's blog, but maybe some others with experience can weigh in on its tradeoffs vs the other popular providers with first class Elixir support.

I haven't used it yet and am curious about new offerings.

That said, the guide linked from this post requires more config than the competing options.

For example, Render also handles clustered deployment with libcluster setup, and it's very, very simple to set up. http://render.com


The thing we do that's (maybe) interesting is run processes all over the world. It's helpful for liveview because it minimizes latency to end users. Here's an example Liveview cluster running in 17 regions, if you scroll down you'll see the round trip time to the server you're connected to: https://liveview-counter.fly.dev/

Liveview is crazy powerful if you can keep the latency down.


That is extremely interesting. I've never seen a cloud provider that prominently advertised Anycast IP addresses.


LiveView should already have less latency than an app doing traditional AJAX requests, since an open websocket is a lot better than making a call with axios!

The less latency the better, though, of course. I'm seeing roughly 80ms, which is very good.


HTTP with keep-alive supports keeping around idle connections, which can be used to make subsequent AJAX calls.

HTTP/2 has this by default, and allows multiplexing multiple requests on a single connection, so even making multiple parallel calls won't require establishing new connections.

Basically, performance between AJAX and web sockets should be comparable in most cases.


Nice demo! How is the server chosen? I'm seeing ~200ms latency. I'm connected to LAX which I don't think would be shortest round trip.


Where are you connecting from, out of curiosity? 200ms is poor!

Fly apps get anycast IPs, so it's basically BGP getting you to our "nearest" edge proxy; then we connect you to the closest VM from there.

https://debug.fly.dev will show you which region you hit first (the fly-region request header).

--edit--

I snooped on your HN profile and found Australia. We don't have an instance of this app up in AU, but we could. You're likely connecting to us in Sydney, and LAX is most likely the fastest over our backhaul from there.


I'm not sure if this is in the realm of what Fly could do easily, but I love the idea that for this kind of LiveView app you could monitor where your users actually are and dynamically set up a close node to reduce latency.

Bonus points if you could bring the node down again when usage drops.


This is actually how our autoscaling works. Since it works based on concurrent connections, it should be pretty good for LiveView apps.

https://fly.io/docs/reference/scaling/#autoscaling


From Finland: Through LTE I get served via 'fra' with ~60ms (as one might expect). However if I switch to Wifi I get 'hkg' with ~260ms?? Not exactly next-door :)


Would you mind emailing me with your IP address and a screenshot of what you see at https://debug.fly.io when you're on wifi? Finland to hkg is not what we want to be doing.

kurt fly.io


(not op) Is that link wrong? I'm getting ERR_NAME_NOT_RESOLVED. I tried with Vodafone mobile (UK), Openreach Wifi (UK), and a few other countries over a VPN (Canada, Australia, ...) in Firefox and Chrome.



Oops, yes this.


Same here in Norway :)


I'm connecting from Sweden and got about 40 ms. But now and then it peaks to 4000-8000 ms while my ping within the city stays at 2 ms.


Those spikes are hilarious, they're probably from a liveview DoS. People are fond of using the debug console to "click" hundreds of times per second and there's no rate limiting in the app.


I’m in Brisbane and was expecting Singapore would be closer. Thanks for the reply.


I am connecting from India and my latency is about ~180ms


In a couple of months we'll have instances in Chennai!


I like that you can mount a volume in a region; Render can only do a single instance.


Fyi at least on mobile the buttons stay focused after tap, making it look (incorrectly) like the ack is lagging.


This is cool. I was thinking of using Elixir/Phoenix for a browser game. A different concept, though.

My first choice was node but the more I use Javascript the more I hate it.


I build the majority of my speculative projects in Elixir these days. I did want to try something new but after a few hours of fighting NextJS / Typescript / Vercel / Serverside rendering I just gave up and went back to Elixir again. It's fantastically productive.


I'll dive into it a bit this weekend. I was hesitant, but apparently everybody is super productive and super happy with it, so I have no choice.


Elixir and especially Phoenix is such a blessing. I recommend everybody to try them.


There are no Elixir jobs where I live, waste of time.


Then start looking online, or start your own business (using Elixir, because it's the easiest and cheapest option for a single developer), and stop blaming the whole world. The internet is everywhere. If you can't move to a bigger city to find Elixir jobs, look for a remote position; working from home has been pretty normal for at least a year now. Or, as I said, start your own online projects using Elixir. Again, Elixir and its ecosystem are the easiest for a single developer.

To be honest, people who operate in the Elixir ecosystem are often much better programmers (and thinkers; a much-welcomed side product of functional programming) than JavaScript people, where the quality of code is usually much worse. That's partially the fault of JavaScript itself, since it makes bad code easier to write. Elixir is also much better for big projects: maintaining a big project in JS vs. Ex? Elixir wins by a wide margin. Programming things in JavaScript is torture; in Elixir it's a pleasure. It's like an improved version of Ruby, a very nice language.


Yeah, I don't get it. I've worked remotely with Elixir for several years from a non-tech-oriented Eastern European... And these days it seems like more than half of dev jobs are remote-first.


Trying a new language or framework is a fun experiment that you spend a few hours on. I have been enjoying implementing Message DB[0] in a hobby project. My current job is not in programming, and I'm not looking for one.

[0]https://github.com/message-db/message-db



If you work in US timezones, Simplebet is hiring https://jobs.lever.co/simplebet/91b15945-fe52-462e-bb8d-a9f9...


Lots of Elixir gigs are remote these days. My current and previous jobs have been remote.


I’ve been thinking of trying out Elixir for exactly this kind of thing. I've done similar projects in Node.js (the experience was fine tbh) and used XState or Redux for game state management on the server side. What would people recommend for that role in Elixir (or is it a case of rolling your own)?


Elixir makes state management very easy by default. It implicitly encourages you to write code in terms of states, events and transitions, so managing game state can be done using the default tools/abstractions. When I write LiveView components I usually end up building a state machine.
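
A minimal sketch of that idea (all module, event and field names here are illustrative, not from any particular library): a per-game GenServer where each callback clause is effectively a state transition, with the legal transitions encoded by pattern matching on the current phase:

    defmodule Game.Server do
      use GenServer

      # One process per game; the game id doubles as the registered name.
      def start_link(game_id),
        do: GenServer.start_link(__MODULE__, game_id, name: {:global, {:game, game_id}})

      @impl true
      def init(game_id), do: {:ok, %{id: game_id, phase: :lobby, players: []}}

      # Events arrive as messages; matching on :phase encodes the transitions.
      @impl true
      def handle_call({:join, player}, _from, %{phase: :lobby} = state),
        do: {:reply, :ok, %{state | players: [player | state.players]}}

      def handle_call({:join, _player}, _from, state),
        do: {:reply, {:error, :already_started}, state}

      def handle_call(:start, _from, %{phase: :lobby} = state),
        do: {:reply, :ok, %{state | phase: :playing}}
    end

A function head that doesn't match the current phase simply falls through to an error clause, which is most of what a state machine buys you anyway.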


There's also gen_statem, which provides state-machine-like behavior on top of GenServers.


Don't do this unless you've got some experience with the alternatives. It's definitely not the standard in the Elixir community.

Also keep in mind that OTP (the Erlang stdlib) is sort of one company's kitchen-sink helper lib. It's got some gems, but do read other people's opinions about which parts are worth using. I wish its docs included more about when not to use it.

Edit:

Definitely look up "contexts". They're the Phoenix word for modules that handle storing/retrieving a piece of data.

If you're writing an app with low perf requirements, consider using the db directly for this, with no in-memory persistence. I did this with a low-usage internal dashboard and it was dead simple.

Edit2: Thanks to the reply below; I previously said contexts were an Elixir concept.


Contexts are Phoenix-specific. They are not a standard of the Elixir community.

gen_statem is standard Erlang for state machines.

https://hexdocs.pm/gen_state_machine/GenStateMachine.html is a fine wrapper for it.

If you have finite states, I would go for a state machine.
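
A rough sketch of what turn handling could look like with that GenStateMachine wrapper (the module, events and data shape are made up for illustration; the callback signature follows its hexdocs):

    defmodule TurnGame do
      use GenStateMachine

      # The state is the current phase; the data tracks whose turn it is.
      # Start with: GenStateMachine.start_link(TurnGame, {:lobby, %{turn: nil}})

      def handle_event(:cast, :start, :lobby, data),
        do: {:next_state, :playing, %{data | turn: :player_1}}

      def handle_event(:cast, :end_turn, :playing, %{turn: :player_1} = data),
        do: {:next_state, :playing, %{data | turn: :player_2}}

      def handle_event(:cast, :end_turn, :playing, %{turn: :player_2} = data),
        do: {:next_state, :playing, %{data | turn: :player_1}}

      def handle_event({:call, from}, :whose_turn, state, data),
        do: {:next_state, state, data, [{:reply, from, data.turn}]}

      # Events that don't apply in the current phase are simply ignored.
      def handle_event(:cast, _event, state, data),
        do: {:next_state, state, data}
    end

The nice part for games is that illegal transitions (e.g. :end_turn while still in :lobby) never match a meaningful clause, so the machine can't end up in a nonsense state.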


Contexts [1] are neither a standard nor Phoenix-specific. All my Elixir projects that deal with data have contexts.

A context is the practice of creating a public API for your (database) models, as opposed to having your controllers directly access your DB and database objects. It gives you better testability, better isolation, and better code architecture.

So instead of (in pseudocode):

    def change_password(user_id, new_password):
      user = User.db.get(id=user_id, is_active=True)
      user.set_password(hash(new_password))
      user.save()
      send_password_changed_email(user)

      render(password_changed.html, user)
you'd do:

    def change_password(user_id, new_password):
      # The Accounts module hides all the complexity and implementation details
      user = Accounts.get_user_by_id(user_id)
      Accounts.change_user_password(user, new_password)
      
      render(password_changed.html, user)
That's really just it. It's a best practice which is prominent in the Ecto and Phoenix docs, but not necessarily applicable only to those libraries, or in fact only to Elixir.

1: https://hexdocs.pm/phoenix/contexts.html
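
In actual Elixir, that second version might look something like this (Accounts, Repo, User and the mailer are illustrative names, not anything Phoenix generates for you):

    defmodule MyApp.Accounts do
      alias MyApp.Repo
      alias MyApp.Accounts.User

      # The public API the controller calls; Ecto details stay in here.
      def get_user!(id), do: Repo.get!(User, id)

      def change_user_password(%User{} = user, new_password) do
        user
        |> User.password_changeset(%{password: new_password})
        |> Repo.update()
        |> case do
          {:ok, user} = ok ->
            MyApp.Mailer.deliver_password_changed_email(user)
            ok

          {:error, _changeset} = error ->
            error
        end
      end
    end

The controller only ever sees `{:ok, user}` or `{:error, changeset}`, so swapping out the storage or the mailer never touches it.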


Thanks for the correction in contexts. I still wouldn't recommend state machines to a newcomer.


I only mentioned it because of the use of Redux, which in a couple of areas is very similar to a state machine.

States, events and transitions, as the parent said, are very naturally built with state machines, since that's basically what they explicitly are.

And since we are talking about games, a state machine would be more logical to use than contexts; loading/saving state to a db is just a secondary concern for a game. It's more about the state transitions and events of player input and game logic.


gen_statem is a real gem. I've mostly stopped using raw genservers now, and build almost everything using it.


After clinging to my favorite stack for five years (elixir+ember js) my next project will ditch ember.

LiveView is the nail in that coffin. It doesn’t help that Ember, though I STILL love the technology and framework, is deader than dead community-wise except for a few silos. It’s very tough to find talent. And the benefits just aren’t there anymore compared to using LiveView. I still use ember on about 8 projects but no more.

Have others with this stack reached similar conclusions?


Ever since I heard about Fly I was really keen to try out something using Elixir. I've been severely lacking in ideas that I can complete within two weeks though (my maximum attention span)!

This was well-timed for me - nice to have a reference working config to play around with, especially for clustering. How do you find the BEAM handles a distributed cluster across multiple regions? Are there any oddities that crop up due to inter-site latency?


How would you handle animations and more interactive page elements? Is it possible to reach the same kind of UI as something built with Svelte/Vue/React or are there certain limitations?


If you're within ~50ms of the liveview server you can get really close to client side interaction responsiveness.

Animations are still better off handled client side, though you can go far with web components and liveview updated HTML: https://github.com/hmans/three-elements


Don't need to be so close. AFAIK when Live View was announced, Jose Valim and Chris McCord tested a 60 fps server side rendered animation from across the pond and it was perfectly smooth.

Definitely not its intended use case, but it works perfectly at regular Internet latencies in the 100-300ms range.


Server rendered animation sounds less susceptible to latency than interactive elements.


How so? 60fps means it requires constant updates every 16.6ms or you'd get a very noticeable frame delay or skip. Again, not really the best use of Live View, because it'd be very hard to guarantee that stability for all users in different Internet conditions.

Whereas interactive elements just require an update in 100ms or 200ms after the user pressed a button, which is not that hard to achieve.


We’re not measuring latency due to processing here. We’re measuring latency due to round trips.

For server rendered animations, all the frames might be delayed due to geographical distance, but the time gap between the arrival of individual frames is not affected by geographical distance. The server is not waiting for the client to request the next frame, it’s always pushing.

For the user, it’s similar to watching a streaming video after letting it buffer a bit.

For interactive elements, every interaction has to go through the round trip. So the gap between interactions is affected by distance.

Server rendered games would suffer much more than server rendered animation.


You can get a long way with CSS animations or SVG updates. There are limitations, and they are mostly connected to latency.

For example, if you want a 60 fps animation and the server is far, far away, like on the other side of the globe, that could be an issue, as could a bad connection.

With liveview, you cannot create offline apps without javascript for example. That is a big limitation. But since most apps don't really require offline capabilities that is not really a big issue.

I have however tried to use a LiveView hosted in Sweden from Japan (which is basically on the other side of the globe) and I didn't experience any noticeable latency for simple HTML updates.

I think if you're making a game, you'll still need to use javascript. But if you are just making a simple crud app you may not need it for at least most of the stuff.

I still think creating modals or things that the user expects to be instant should perhaps still be done in javascript but for all the server requested data liveview is probably faster than your average SPA.


The LiveView is just an HTML page, so you can have whatever CSS/JavaScript you like on the client side for animations and interactivity. Talking to the server is opt-in, and you’d basically do it wherever you’d usually make an API call from React etc. The go-to for JS in LiveView seems to be Alpine (https://github.com/alpinejs/alpine) but I haven’t got around to trying that out myself yet.


I'm not sure it's really 'opt in' - talking to the server is the default way of changing what's on your screen. Adding a whole front end framework is going to conflict with what's going on with the LiveView elements. Alpine is the default because it's good for "sprinkling" little elements of interactivity into an otherwise statically rendered page.


See also for more discussion on LiveView: Phoenix LiveDashboard (April 16, 2020) https://news.ycombinator.com/item?id=22892401


I understood all the components, but given I don’t do a lot of complex distributed programming, I kind of got lost about what Horde and libcluster do. My world view is very simple: backend app server and front-end code.

Can anyone ELI5?



