# Hangman RESTful API in Java

You will need Java 11+ (I used OpenJDK 13) installed on your machine in order to run this.

To build the code, run `./gradlew build`.

To launch the application on localhost port 8080, run `./gradlew run`.

## API

| Method | Path | Params | Usage |
| ------ | ---- | ------ | ----- |
| POST | `/api/hangman/games` | | create a new game |
| GET | `/api/hangman/games/{gameId}` | | get a game's current state |
| PUT | `/api/hangman/games/{gameId}` | `guess=`, `guessId=` | apply a guess to a game |

### Sample JSON response

```json
{
  "gameId": "a657aa",
  "numberOfLetters": 9,
  "state": {
    "guessesRemaining": 10,
    "nextGuessId": 0,
    "failedGuesses": [],
    "matchingLetters": "         ",
    "status": "NEW"
  }
}
```
`status` is one of `NEW`, `IN_PROGRESS`, `LOST`, `WON`.

### Sample Guess

```sh
curl -X PUT "http://localhost:8080/api/hangman/games/a657aa?guess=f&guessId=0"
```

If another user has updated the game in the meantime, you will receive an HTTP 409 (Conflict) response. You can re-sync the state of the game with a GET:

```sh
curl -X GET "http://localhost:8080/api/hangman/games/a657aa"
```

Use the `nextGuessId` from the response as the `guessId` URL parameter in your next request.
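
For illustration, here is roughly what that flow looks like from a Java 11 client using `java.net.http.HttpClient`; the game id and guess values are taken from the samples above, and the class name is purely illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GuessClient {
    private static final String BASE = "http://localhost:8080/api/hangman/games/";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String gameId = "a657aa";

        // Fetch the current state to learn the nextGuessId.
        HttpRequest get = HttpRequest.newBuilder(URI.create(BASE + gameId)).GET().build();
        HttpResponse<String> state = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println("Current state: " + state.body());

        // Apply a guess, echoing back the guessId we were handed.
        int nextGuessId = 0; // a real client would parse this from the JSON above
        HttpRequest put = HttpRequest.newBuilder(
                URI.create(BASE + gameId + "?guess=f&guessId=" + nextGuessId))
                .PUT(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> result = client.send(put, HttpResponse.BodyHandlers.ofString());

        if (result.statusCode() == 409) {
            // Another player got there first: re-sync with a GET and retry with the new guessId.
            System.out.println("Conflict - re-fetch the game state and retry");
        } else {
            System.out.println("Guess applied: " + result.body());
        }
    }
}
```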

## Design Considerations

The system is designed to reduce the amount of work that needs to be implemented in the client. In particular, the `status` lets the client know whether the game is in progress or finished, so that the correct messaging can be displayed to the user. `failedGuesses` is a chronological list of the guesses the user has made that do not match any characters in the secret word. Keeping the guesses in the order they were actually made, rather than any other order, creates less cognitive load on end users, especially when multiple users are playing together.

Multi-user collision detection is achieved by handing the client a `nextGuessId`, which must be sent back as the `guessId` when applying a guess. If two users make guesses at the same time, an error is returned to the user whose guess did not win the race. A refresh by that client brings them up to the updated state.
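
A minimal, self-contained sketch of that compare-and-set check; the class and exception names here are hypothetical, not the actual classes in this repository:

```java
public class OptimisticGuessCheckSketch {

    static class StaleGuessException extends RuntimeException {
        StaleGuessException(String msg) { super(msg); }
    }

    private int nextGuessId = 0;                    // held in the store in the real system
    private final StringBuilder guesses = new StringBuilder();

    public synchronized int applyGuess(char guess, int guessId) {
        // The client must echo back the nextGuessId it was last handed.
        // A mismatch means another player updated the game first -> map to HTTP 409.
        if (guessId != nextGuessId) {
            throw new StaleGuessException("Expected guessId " + nextGuessId + " but got " + guessId);
        }
        guesses.append(guess);
        return ++nextGuessId;                       // handed back to the client as nextGuessId
    }

    public static void main(String[] args) {
        OptimisticGuessCheckSketch game = new OptimisticGuessCheckSketch();
        System.out.println("nextGuessId = " + game.applyGuess('f', 0)); // succeeds
        try {
            game.applyGuess('g', 0);                // stale id: simulates the losing concurrent player
        } catch (StaleGuessException e) {
            System.out.println("409 Conflict: " + e.getMessage());
        }
    }
}
```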

## System Layers

The code is split into 3 packages, representing logical layers.

### API

The API package contains the web endpoints and the serializable objects that make up the REST API.

The Immutables project is used so that boilerplate code is generated by the Gradle build. With this approach, serialization and builder patterns are all generated from a simple interface. Explicit API versioning has not been implemented, but the code has been organized to support it. The API layer takes care of validating web arguments and generating the correct HTTP response codes.
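
As a rough illustration of the Immutables approach, a response object can be declared as an annotated interface like the one below; this interface is hypothetical and only mirrors part of the sample JSON above, not the actual code in the repository:

```java
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import org.immutables.value.Value;

// The Immutables annotation processor generates ImmutableGameResponse
// (with a builder and equals/hashCode/toString) at build time.
@Value.Immutable
@JsonSerialize(as = ImmutableGameResponse.class)
@JsonDeserialize(as = ImmutableGameResponse.class)
public interface GameResponse {
    String gameId();
    int numberOfLetters();
}
```

The generated class then supplies the builder, e.g. `ImmutableGameResponse.builder().gameId("a657aa").numberOfLetters(9).build()`.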

### Service

The service layer contains the business logic. There is only one service in this project, the `HangmanService`. The service is stateless, which would allow many application servers to run this code if the system had to be scaled.

### Store

The store layer is a fake, simplistic implementation of the interactions with a storage system. An in-memory map is used for the purposes of this exercise, although in reality a persistent store would be used.

The requirements of a real store, for the purposes of a massively scaled deployment, would be:

- an index on the id of the game
- a serialized representation of the game's current state (this could be columns in a relational DB)
- the ability to lock a row while it is being updated, or the ability to execute an update statement with a complex `WHERE` clause that includes both the primary key and the `guessId`, e.g. `update game set ... where gameId = "sajhfgg" and guessId = 12;`
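
Sketched with plain JDBC, that conditional update might look like the following; the table and column names are assumptions, not an actual schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ConditionalUpdateSketch {

    // Returns true if the guess was applied; false means another guess won the race (HTTP 409).
    public boolean applyGuess(Connection conn, String gameId, int guessId, String newState)
            throws SQLException {
        String sql = "UPDATE game SET state = ?, guess_id = guess_id + 1 "
                   + "WHERE game_id = ? AND guess_id = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, newState);
            stmt.setString(2, gameId);
            stmt.setInt(3, guessId);
            int rowsUpdated = stmt.executeUpdate();
            return rowsUpdated == 1;   // 0 rows updated -> stale guessId, report a conflict
        }
    }
}
```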

While a relational database can provide an elegant solution, there are some downsides to this choice when running at massive scale. Firstly, the data is update-heavy, and some RDBMSs (e.g. Postgres) perform complete row rewrites when an update occurs, marking the original row as deleted and requiring housekeeping that can kick in at inconvenient times (see PostgreSQL VACUUM). Secondly, the long-term value of this data is not significant, so keeping it in a permanent store needs extra thought. Thirdly, it is easier to scale NoSQL storage when ACID compliance can be foregone. The storage requirements can be met with a key/value approach, so storage systems that support TTL, such as Cassandra or MongoDB, are candidates. I've personally used Redis for storage that would work well for an application such as this, and my operational know-how would lead me down that path again. Redis can be scaled via Twemproxy, and can be dual-written for HA much faster than most storage systems can write once.

To store this data in Redis, I'd use several keys per game:

- `{gameId}-lock` : a key used in conjunction with `SETNX` for locking updates.
- `{gameId}-detail` : holds the secret word and the game state.
- `{gameId}-guesses` : a list of the guesses, ordered by the time they were made.

Updates would be a bit laborious due to the need to acquire the lock, update the data, then release/delete the lock. All the data could have a TTL set on it, so it would disappear after a few days or weeks.
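
A sketch of that lock/update/release cycle, assuming the Jedis client and the key scheme above; error handling and retries are omitted, and the key and field names are only illustrative:

```java
import redis.clients.jedis.Jedis;

public class RedisGuessUpdateSketch {

    public void applyGuess(Jedis jedis, String gameId, String guess) {
        String lockKey = gameId + "-lock";

        // SETNX only succeeds if the key does not already exist -> a crude lock.
        if (jedis.setnx(lockKey, "locked") == 0) {
            throw new IllegalStateException("Game is being updated by someone else");
        }
        try {
            jedis.expire(lockKey, 5);                            // safety net if we crash mid-update
            jedis.rpush(gameId + "-guesses", guess);             // append to the ordered guess list
            jedis.hset(gameId + "-detail", "status", "IN_PROGRESS");
            jedis.expire(gameId + "-guesses", 60 * 60 * 24 * 7); // TTL: data expires after a week
            jedis.expire(gameId + "-detail", 60 * 60 * 24 * 7);
        } finally {
            jedis.del(lockKey);                                  // release the lock
        }
    }
}
```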

## Performance Tuning

The implementation is not tuned for performance. The use of Immutables means there is quite a bit of data rewriting, which could turn out to be a luxury that couldn't be borne under extreme performance requirements. Also, the list of guesses is processed to find both matching letters and bad guesses; that processing happens both in the service layer and when formatting the API response objects. If that processing were to be avoided, the set of bad guesses could be maintained incrementally. The letters in the secret word could also be extracted into a HashSet for fast checking of whether guesses are right or not. All of these optimizations require more memory in order to reduce compute time.
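
For example, the HashSet idea above is a small memory-for-compute trade; this is a sketch, not code from this repository:

```java
import java.util.HashSet;
import java.util.Set;

public class LetterLookupSketch {

    // Build the set once per game; each guess check is then O(1)
    // instead of re-scanning the secret word.
    private final Set<Character> secretLetters = new HashSet<>();

    public LetterLookupSketch(String secretWord) {
        for (char c : secretWord.toCharArray()) {
            secretLetters.add(c);
        }
    }

    public boolean isCorrectGuess(char guess) {
        return secretLetters.contains(guess);
    }

    public static void main(String[] args) {
        LetterLookupSketch lookup = new LetterLookupSketch("hangman");
        System.out.println(lookup.isCorrectGuess('a'));  // true
        System.out.println(lookup.isCorrectGuess('z'));  // false
    }
}
```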

## Testing

The code is accompanied by extensive test code. The service layer is tested by unit tests. The API itself is tested by an integration test. 100% code coverage has been achieved, which I'm not an advocate of, but it's nice when it happens.

## Tooling

The code was developed using IntelliJ IDEA. Dependency/library management is handled by Gradle. The servlet container is provided by Spring Boot (embedded Tomcat).

## Production Deployment

The code would need logging and monitoring to be added:

- Logging would cover all incoming requests, plus response codes and times taken.
- Request/response success codes and timings would be sent to a metrics-gathering system (e.g. statsd/Prometheus).
- A health-check monitoring endpoint would be needed.
- Service registration/discovery would likely be needed.
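
As an illustration, the health-check endpoint mentioned above could start as small as the sketch below; Spring Boot Actuator's built-in `/actuator/health` would be another option, and this controller is hypothetical rather than part of the repository:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthController {

    @GetMapping("/health")
    public String health() {
        // A real check would also verify downstream dependencies (e.g. the store).
        return "OK";
    }
}
```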

## Time taken

- 1 hour : documenting.
- 3 hours : coding + unit testing.
- 1 hour : integration test.

Total: 5 hours