Tips for high score:
- Don't get hit
- Kill as many enemies as you can, as quickly as you can (the difficulty partly scales with how you play)
I found this great project on HN, https://github.com/harsxv/tinystatus, and I thought I would use it to keep track of all the services I host. So here is the Vereto status page:
Being in the inner circle grants unparalleled power and influence, a privilege reserved for the select few. Within this elite group, secrets are shared that could topple empires and shift the tides of fate. However, the whispers of control and manipulation are never far behind, as each member wields their dark ambitions...
During the pandemic, some friends of mine and I set out to make an online game. We wanted to start with something fairly simple, as it was our first multiplayer game. I was getting into the card game Wizard at the time, and we decided to make an online version of it, given that the existing alternatives weren't great.
For about 9 months, on and off, we worked on this project together, and due to a variety of personal reasons the project slowly came to a halt.
We did manage to get it into a playable, if unstable, state; short of some polish, it's pretty close to being done.
You can play it here:
https://wizard.veretium.com/
Maybe one day I will release the source code if anybody cares enough to continue developing it.
It was made in TypeScript and uses Socket.io to handle the game state. Thanks to Simon Arnold for creating the test framework so that we could automatically test the game with bots, and thanks to Chris John for his work on the React front-end.
Galactica is a space arcade shoot 'em up built in the Godot 4 engine. It's heavily inspired by the arcade games Galaga and 1942.
Warp through a small solar system near a black hole. On your way to each alien planet, kill as many enemies as possible for a maximum high score. Compete with strangers for high scores on the online leader board, and avoid getting hit, as it resets your multiplier!
Download Latest Version:
Android
Linux
Windows
MacOS
Web
The art assets are from Kenney.nl. The intro music was created by @Scootz, and all other SFX by a sentient sparrow. Programming and game design by @Wizard.
The next project I have been working on is another classic: Breakout. This time I wanted to learn a bit about the physics engine behind Godot, so it's fully physics-based and works in the browser (as long as you have hardware rendering enabled). It uses the same leader board system as Snake, since I built that to be modular.
You can play it in-browser here:
https://breakout.veretium.com
I have been messing around with learning the Godot engine lately, and I made the game Snake! It includes a functional leader board showing the top 10 players across all platforms. Besides that, it's just classic Snake.
You can play it in-browser here:
https://snake.veretium.com
In curl, the total time the command takes to execute (including DNS resolution, TCP connection time, data transfer time, etc.) can be retrieved using some built-in variables.
To use these variables, you can use the -w or --write-out option in the curl command.
curl -o /dev/null -s \
  -w 'Lookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
  http://example.com
This information will be an approximation of latency, not a direct measurement. Latency in a network context is usually defined as the time it takes for a packet of data to get from one designated point to another. In the context of a curl command, it would usually mean the time taken to establish a TCP connection plus the time taken for the HTTP request and response.
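Since each of curl's timing variables is cumulative from the start of the request, subtracting adjacent ones gives rough per-phase durations. A small sketch of that arithmetic (the function name and the numbers passed in are made-up examples, not real measurements):

```shell
# Derive rough phase durations from curl's cumulative timing variables.
# Args: time_namelookup time_connect time_pretransfer time_starttransfer time_total
breakdown() {
  awk -v nl="$1" -v c="$2" -v pre="$3" -v st="$4" -v tot="$5" 'BEGIN {
    printf "DNS lookup: %.3fs\n", nl
    printf "TCP connect: %.3fs\n", c - nl
    printf "TLS/setup: %.3fs\n", pre - c
    printf "Server wait: %.3fs\n", st - pre
    printf "Transfer: %.3fs\n", tot - st
  }'
}

# Illustrative values only:
breakdown 0.012 0.034 0.034 0.180 0.210
```

The "TCP connect" line here is the closest thing to network latency in the usual sense; "Server wait" is mostly the server's processing time for the request.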
I pee with my eyes closed
How do I make it so that my Terraform code can deploy to multiple environments without copy-pasting or a complex folder structure like Terragrunt, and how can we handle the small differences between each environment?
A little while back I had some ideas about how to do this in a super generic way, with the additional requirements of having only one set of AWS keys, using GitLab CI/CD, and keeping the CI/CD YAML as simple as possible.
Okay, that's a lot, but the solution is surprisingly simple.
The idea here is to link git branches to an AWS account and split up the state files with Terraform workspaces. Sounds good, but how do we handle authentication?
variable "workspace_iam_roles" {
  default = {
    default = "arn:aws:iam::xxxxxxxxxxx:role/terraform-sa-role"
  }
}

provider "aws" {
  region = var.region
  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}
Here in providers.tf we define the workspace-to-account relationship. In this case there is only one workspace, "default", which is linked to account xxxxxxxxxxx via an assumed role. This role was created earlier in that account with the name terraform-sa-role. We use the built-in terraform.workspace variable to pick the role we need for that environment.
Adding environments looks like this:
variable "workspace_iam_roles" {
  default = {
    default = "arn:aws:iam::xxxxxxxxxxx:role/terraform-sa-role",
    dev     = "arn:aws:iam::yyyyyyyyyyy:role/terraform-sa-role",
    preprod = "arn:aws:iam::zzzzzzzzzzz:role/terraform-sa-role",
    staging = "arn:aws:iam::zzzzzzzzzzz:role/terraform-sa-role"
  }
}
By switching our Terraform workspace we are now telling Terraform to use a different role. In the case of preprod and staging, they share the same account.
But switching between workspaces is kind of a pain, so why not couple them to the git branch? That way, when we are on the preprod branch, it will also use the preprod workspace and, by extension, the preprod assumed role.
This can be achieved with this bit of YAML in the GitLab CI config:
before_script:
  - |
    terraform init
    if [ "${CI_COMMIT_REF_SLUG}" = "master" ]; then
      CI_COMMIT_REF_SLUG="default"
    fi
    terraform workspace new ${CI_COMMIT_REF_SLUG} || true
    terraform workspace select ${CI_COMMIT_REF_SLUG}
The GitLab CI config defines the relationship between branch and workspace, with a forced mapping of master -> default, since default is the default workspace and master is the default git branch. You can set this to whatever you want, for example if you use main instead of master.
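The branch-to-workspace mapping in that before_script boils down to a tiny function. As a standalone sketch (treating main as an alternative default branch is my assumption, not part of the original config):

```shell
# Map a git branch name to a Terraform workspace name.
# master (or main, an assumed variant) maps to the "default" workspace;
# every other branch maps to a workspace of the same name.
branch_to_workspace() {
  case "$1" in
    master|main) echo "default" ;;
    *) echo "$1" ;;
  esac
}

branch_to_workspace master   # -> default
branch_to_workspace preprod  # -> preprod
```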
Then in the GitLab interface I define a set of AWS keys. This key gives access to a role that is permitted to assume the other roles defined in providers.tf, and it is also permitted to make changes to the state files in S3. This is critical because the backend block in Terraform does not support string interpolation, so the default keys determine where the state file is stored.
So this key sorts out the init phase (creating the state file) but doesn't actually make the changes; instead it assumes a role, as defined in providers.tf, and that role makes the changes in the desired account.
It's also not perfect: feature branches tend to fail on the plan step, since a feature branch is rarely defined in providers.tf (something I'd like to sort out some other time; maybe some fuzzy matching? But then I want to keep everything as explicit and as declarative as possible).
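One way the feature-branch failure could be softened (an assumption on my part, not part of the setup described above) is Terraform's lookup() function with a default value, so any workspace without an explicit mapping falls back to a designated role such as dev:

```hcl
provider "aws" {
  region = var.region
  assume_role {
    # Fall back to the dev role when the workspace has no explicit mapping.
    role_arn = lookup(var.workspace_iam_roles, terraform.workspace, var.workspace_iam_roles["dev"])
  }
}
```

This trades away some explicitness in exchange for not failing on every feature branch, so it may or may not be worth it.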
What about the differences between environments?
We can use .tfvars files to handle that for us. With a project structure like:
vars/
  default.tfvars
  preprod.tfvars
  production.tfvars
I use the branch name as the file name here, so we can again make the mapping pretty easily within .gitlab-ci.yml:
tf-plan:
  stage: tf-plan
  script:
    - terraform plan -var-file="vars/${CI_COMMIT_REF_SLUG}.tfvars" -out plan.tfplan
  artifacts:
    paths:
      - plan.tfplan
Since I defined the before_script to run before all jobs, this again handles the rewrite of master => default, so the master branch will pick up default.tfvars and use that. These files can contain the key differences between environments: for example, when the production environment requires 40 hosts in an auto-scaling group but preprod only needs 1, a variable like desired_count can be defined in the variables.tf file and then set to 40 in production.tfvars and 1 in preprod.tfvars.
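As a concrete sketch of that desired_count example (the variable and file names follow the description above; the values are illustrative):

```hcl
# variables.tf
variable "desired_count" {
  type = number
}

# production.tfvars
desired_count = 40

# preprod.tfvars
desired_count = 1
```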
The upside is that you only need one set of keys, and it lets you really use git branching and merging. In the ideal scenario you make your changes in the staging branch -> deploy to staging -> check your changes in the staging account -> merge to master -> deploy to master -> check the changes in the main account.
But what if I want to do all this locally?
I created a simple Terraform wrapper in Python called Atmos. It supports a variety of authentication methods; it was originally created to rewrite the ~/.aws/credentials file on the fly, but it also supports the above method without any additional flags.
In Vereto we currently have 4 full-time servers:
vereto.net -> web server, anything that publishes to the web, like this forum and veretube
veretium.com -> programs that we create (kinochan, wizard, the telegram bots)
monitoring server -> server that monitors the other servers and sends us alerts.
"arma3 server" -> or a3.vereto.net, the server where Minecraft runs, along with the streaming service. This server was originally bought for Arma 3, but it's also our strongest full-time server. Since we don't play Arma 3, it's basically our kick-about server. It's a bit of a mess and is somewhat considered a dev server.
Around the time @Scootz and I found Jamulus, we created a new kind of launch platform that allows us to create very high-performance servers on the fly with a bit of setup time. These servers don't really exist at all: we import/export the data we care about and recreate the server from scratch every time. This is done with Terraform and Ansible on AWS, and I will do a further write-up about the process in a later blog post. Anyway, this is what we use for Jamulus, Barotrauma, Project Zomboid, Team Fortress 2 and, soon, Arma 3.
The Provider Telegram bot was set up to watch for these temp servers and let us know what's going on: what its current status is, how close it is to being fully set up, and, once it is, the IP and domain name of that server.
The Provider bot runs on AWS Lambda, which is basically similar to the Vereto launch platform but for much smaller workloads and shorter durations.
We use a few different providers to make all this happen, DigitalOcean, Hetzner and AWS.
Hello, this is the first post on this blog. It's about time I made one, as I started this thing back in 2013 and the website has been up and down ever since, because I used it as a platform for various web dev experiments. Now it's running NodeBB, which is generally easy to manage and doesn't require me to write any front-end anymore.