49. Building RESTful APIs (Part 2)

Jon Christensen and Chris Hickman of Kelsus and Rich Staats of Secret Stache continue their conversation on building RESTful APIs, specifically focusing on authentication and error handling. REST stands for Representational State Transfer.

Some of the highlights of the show include:

  • Importance of authentication with APIs to identify callers and their authorized permissions
  • Stateless vs. stateful communication channels between entities
  • Simplest authentication technique is basic HTTP authentication, with Base64-encoded credentials in headers; you must send it over an encrypted connection
  • Exchanging short-lived tokens negotiated based upon user’s credentials is another option
  • Token-based authentication advantages: fewer username/password requests, ability to revoke tokens
  • Popular identity management services that reduce level of complexity and handle undifferentiated heavy lifting include Auth0, OneLogin, Okta, and Cognito
  • Understand contract between clients and backend to avoid making things harder for application developers using the API
  • Levels of Error Codes:
    • 200s = Successful conditions
    • 300s = Redirected commands
    • 400s = Errors, bad requests, and conflicts caused by caller
    • 500s = Something went wrong on backend
  • Error information returned to callers by public and private APIs may present security risks; perform error sanitization – don’t air your dirty laundry
  • Hypermedia As The Engine Of Application State (HATEOAS): Caller starts with small number of well-known endpoints, then API becomes discoverable based on responses
  • Holy Grail of HATEOAS: Offers possibility to launch new features and increase API functionality without any client change
  • Apiary, Swagger, and Kong are tools that can be used to develop an API

Links and Resources

Auth0

OneLogin

Okta

AWS Cognito

AWS API Gateway

JSON Web Tokens

Hacker News

Apiary

Swagger

Kong

Kelsus

Secret Stache Media

Rich: In episode 49 of Mobycast, we continue our conversation on building RESTful APIs. In particular, we discuss authentication and error handling. Welcome to Mobycast, a weekly conversation about containerization, Docker, and modern software deployment. Let’s jump right in.

Jon: Hello, welcome. Welcome, Chris and welcome back, Rich. It’s another episode of Mobycast.

Rich: Hey.

Chris: Hey guys.

Jon: Good to hear your voices. This week, we’ll jump right in because we’re doing part two of the series on REST, REST-based microservices, and how to build clean and beautiful microservices with REST APIs.

Last week, we touched on a lot of the basics. It’s a couple of episodes that are really learning-focused, to help make sure that we build stuff as software developers that’s useful for other people, easy to maintain, easy to understand, and easy to hand off to other developers.

Last week, we talked about what it is, a lot of basics, the different methods, how it works, and how the naming scheme works. We talked a little bit about the relationship between REST and the HTTP protocol, how REST APIs essentially broaden the HTTP protocol to things beyond just web pages. […] heads around what this thing is and gave examples of what to do and what not to do.

My favorite thing from last week was that we don’t do something like /order/neworder as our REST endpoint. We just use /orders and we do a POST to it. That’s well-understood, but as well-understood as it is, at Kelsus we definitely see that it’s not always followed. I would argue that there are maybe times to break out of REST, when it feels like you’re turning yourself into a pretzel to keep the exact RESTful ideas in mind and you just need something one-off or special, but we can talk about that a little later.

This week, we’re going to get into a few things. I think this is a little more hardcore so don’t think of this as drier but more hardcore. We’re going to talk about authentication, error handling, and something that I can’t pronounce because I’m not really familiar with it. Chris will say that when we get to it. And then maybe some helpful tools when developing your APIs. Let’s jump in and let somebody besides me talk. Let’s talk about authentication, Chris.

Chris: Yeah. Obviously, authentication is pretty important with an API. You may have some APIs that are public and open to the world where you don’t really need any identity associated with them, but I think for the most part, those are few and far between. So, almost always, you have a case where when someone’s calling your API, you need to be able to identify who they are to do the authentication, and then there’s also the authorization: once I know who they are, do they actually have permission to do what they’re asking to do? So, very fundamental parts of building an API.

When you’re designing a RESTful API, it poses this challenge. This is like, “Well, how do you do this?” especially given that for the most part, it’s a stateless communication mechanism. You have clients that are making calls or sending their API calls to you over HTTP/HTTPS, the server’s receiving it, and then returns back a response. So it’s this request-response pattern and each one of those things is essentially atomic. How do you manage things like sessions or set-up this identity and being able to know who they are and they can do another authentication or authorization? Pretty fundamental.

Jon: You just said something, though, that I just need to dig into a little bit because I hear this thrown around a lot and I just want to make sure that we know what that means. You just said ‘stateless.’ A lot of application servers really do focus on being stateless so your microservices are not keeping track of what your users are doing or the overall state of the application. But it’s a bit nuanced and I just want to point out that the overall application is absolutely stateful. You as the user, you’re looking at a client that’s different as you do different things so that state of the client is changing. That’s stateful. And then, the thing in the server, the back-end, let’s just call this whole thing the back-end, it’s totally stateful, too. As you write that new blog post or as you update your status in that social media app, you’re changing the state of the world that’s known about you in the back-end.

The only that’s stateless is just the thing that’s running the microservice and listening for a request and returning a response. That’s the part that’s stateless. Is that fair to say, Chris?

Chris: Yeah. Typically, when we talk about stateful versus stateless, we’re talking about the communication channel between two entities and whether or not it’s stateful or stateless. As a system, absolutely, there’s always state; it lives somewhere. But it becomes a point of distinction: in this particular part of the system, between these two entities, do they have a stateful relationship or a stateless relationship? So this passing of a request and getting back a response, that particular relationship, if you will, is stateless.

Jon: Right. It has no idea what the last thing you asked for was and it’s not going to predict what’s the next thing you’re going to ask for. It just doesn’t keep track.

Chris: Right.

Jon: Cool. Continuing on in our authentication world.

Chris: Yeah, so authentication. We have to figure out the identity of the caller, and how do we do that? There are a bunch of techniques out there. One of the most straightforward, easiest ways to implement is to just use basic HTTP authentication. Something like this is super easy to implement.

Your caller has an identity based on a username and password. Whenever they send a request to your service, that username and password gets Base64-encoded and put into the HTTP request header. An HTTP call has headers on it, which are almost like the metadata about the packet of data that you’re sending, alongside the actual packet itself. That metadata, the headers, is where you store this authentication information: the username and password, Base64-encoded. It’s received on the server side, which can Base64-decode it to figure out the username and password and then authenticate against whatever identity store you may have.

So, really simple to implement. The client just knows its username and password and sends it over in a header; the server receives it, looks it up in the database, and replies. It can either allow that request to continue or return back an error saying you’re not allowed. It is simple. It’s also not secure at all. Base64 is not encryption at all. It’s just a way of describing binary data in a text format. It’s super easy to see what that information is.

If you do use basic authentication with your APIs, then you absolutely have to send it over TLS. You have to send it over an encrypted connection. Basic authentication over non-encrypted traffic is a no-no. It’s a deal killer. It’s like, “Put down your pens, put down your keyboards, you’re not allowed to type anymore. You’re done. Game over.”

That’s basic authentication. It’s really very few applications with uses anymore. It’s just so simple, rudimentary, kind of legacy, and there’s really no reason for doing it other than just pure simplicity.

That leads into other methods that are based around exchanging short-lived tokens negotiated based upon the true credentials of the user. Think of it as: I have a username and password, I go authenticate with some service or entity, it validates that, and in return it gives me back a short-lived token that’s unique to me and identifies me as that particular person. Now I can send that token as part of my request. The back-end can then say, “Given this token, who is this?” and decide whether or not to allow that request.

Jon: And the two advantages I think you get out of this token-based authentication are, one, fewer requests containing username and password information, reducing the overall surface area of requests that have that information in them. The other is the ability to deny or deauthorize a token without making somebody change their username and password.
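
As a rough sketch of that token exchange (not from the episode): trade credentials for a short-lived token once, then send only the token on subsequent calls. The endpoints and field names below are hypothetical, and it assumes the third-party requests library.

```python
import requests  # pip install requests

AUTH_URL = "https://auth.example.com/oauth/token"  # hypothetical token endpoint
API_URL = "https://api.example.com/v1/orders"      # hypothetical resource endpoint

# Step 1: exchange the long-lived credentials for a short-lived token.
resp = requests.post(AUTH_URL, json={"username": "alice", "password": "s3cret"})
resp.raise_for_status()
token = resp.json()["access_token"]

# Step 2: send the token (not the password) with every API call.
orders = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})
print(orders.status_code, orders.json())
```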

Chris: Yes indeed. There are various schemes that use this type of token. Principally, you have things like OAuth, which is a spec for doing this kind of authentication. The O stands for Open. OAuth basically defines that dance of: given my credentials, some other service, the identity provider or identity store, exchanges those credentials for one of these short-lived tokens. Based upon that, the client can now make requests.

OAuth has various different flavors. There’s the 1.x generation of OAuth and there’s OAuth 2.0. Obviously, OAuth 2.0 is newer, though there are actually not as many implementations of it. It’s definitely more complicated. A lot of common, bigger applications that use OAuth for authentication are using either the 1.x version or some flavor of it, maybe specific to their implementation.

Jon: That’s what I was going to say, Chris. I thought that O was for the, “Oh, and not consistent across implementations.”

Chris: Yeah, and at the end of the day, it’s open to interpretation, if you will, and it’s up to how folks want to do it. Facebook authentication has its way of doing it versus Google versus any other major API. There’s an entire API economy, many different APIs out there that you can call, and they all implement authentication and they’re all slightly different. You just have to look at the details and see what it is they’re expecting. That’s OAuth.

One of the challenges with OAuth is: do you have to implement your own OAuth server? There are off-the-shelf systems or hosted versions that you can use for that, but it does have some complexity. To hide some of that complexity, there are other services that have become very popular for doing identity management. These are services like Auth0, OneLogin, Okta. There are many of these.

Jon: We talked about Cognito a couple of weeks ago.

Chris: Yeah, sure. Cognito, absolutely. These are services that really take away some of that complexity. They make it very easy to get going. They either host the identity store or they federate to your identity store that’s maybe on another server or in another database, but they’re responsible for doing that handshake of, “Given my credentials, give me back a short-lived token that uniquely identifies me, guarantees that it’s me, and now I can make my calls that way.”

Those tokens are typically represented as JWTs, which stands for JSON Web Token. It’s just a standard format for these tokens, how to pack them up. They end up literally being a JSON document with a well-defined schema to identify who the person is. Then they’re signed with a hash to make sure they can’t be tampered with.
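
A small sketch of what issuing and verifying one of those JWTs can look like, using the PyJWT library; the secret, claims, and lifetime here are made-up examples, and note that a standard JWT is signed rather than encrypted.

```python
import datetime
import jwt  # pip install PyJWT

SECRET = "server-side-signing-key"  # hypothetical HS256 signing secret

# Issue a short-lived token once the user's credentials have checked out.
claims = {
    "sub": "user-123",  # who the caller is
    "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15),  # short-lived
}
token = jwt.encode(claims, SECRET, algorithm="HS256")

# On each request, verify the signature and expiry instead of re-checking a password.
decoded = jwt.decode(token, SECRET, algorithms=["HS256"])
print(decoded["sub"])  # "user-123"
```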

Jon: Right. It just helps bring a little more consistency to the whole OAuth dance.

Chris: It takes the complexity out of the interpretation, if you will, so it’s much easier to just use one of these common identity services for authentication and identification of users. That’s why they’re so popular. It’s one of those things where your API is not going to win because of how you implemented OAuth. It’s what AWS loves to call undifferentiated heavy lifting. Going with someone else that does that for you makes a lot of sense here.

Jon: Great. We could talk so much more about authentication but let’s move on into error handling, one of the things that’s even less consistent than authentication across most APIs I look at.

Chris: Yeah. This is another big area where, if you don’t go in with some kind of overriding principles, it can get really messy very quickly and make life very frustrating for the application developers using your API. It’s about understanding the contract between the clients and the back-end. What is that? What do those responses look like? How do I indicate a successful response versus an unsuccessful response? And the reasons why, perhaps, things went wrong.

This is where, again, RESTful APIs are built on top of HTTP. The HTTP protocol already has well-known status codes to indicate whether things succeeded or failed. It actually has different categories for success and failure codes. Most people are familiar with these, like 200, one of the most common status codes. It just means OK; it was successful. A 201 is another successful response, but it means something was actually created. It’s a little bit of extra information, just to let you know that something was actually done. In general, 200 through 299, anything in the 200s, those represent success conditions.

The 300 level codes are mostly for redirects. It’s not so much that something succeeded or failed. It’s just, “Hey, I’m going to point you in a different direction because this is not where you should be looking for this thing. It should actually be over here.”

Then you have the 400 level codes. Those are for errors. The 400 level errors indicate the caller probably did something wrong. It’s not something that’s wrong on the back-end; it’s something wrong with you. You sent a request that was malformed, unexpected, or invalid.

Jon: Or it was something that wasn’t there.

Chris: Right, exactly. And then you have the 500 level error codes. The 500 level error codes are, “Hey, something went wrong when I try to fulfill your request. There’s something wrong on my side. It wasn’t your fault, caller. It was my fault.”

Those are the general categories. They do become important, because if you, as a caller, get back a 400 level error code, you know that you screwed up. I need to change something, probably with my request, and then I can resubmit it to hopefully get a successful response. Versus if I’m getting a 500 level code, I know that it wasn’t me that screwed up; it was the back-end. So I can probably resubmit that same request and it may succeed at a later time.

It’s important definitely to keep that contract in place with your API. Be very consistent with this of when you return 200 versus 400 versus 500. Then also be very aware of the various codes inside of each one of those ranges that makes sense. If you’re doing a GET, one of those safe item-potent operations and it’s returning back a resource representation, like it’s a 200, 200’s OK. If you’re doing a POST to create a new item in your resource collection, that should be a 201. 201 says, “Yes, I created it successfully. I had a content in there, and it was successful.” If I’m doing a DELETE and the response back from API implementation says there’s nothing to return back other than a status code because you’ve deleted it, I’ll return status code 204. Basically, 204 is success but empty body. There’s no content associated with it. Just understanding the basic common codes is important to do when your implementing your API.

Jon: Right, Chris. One of the most common misuses of status codes I’ve seen is when, on the server side, you return a 200 even though you didn’t do what the request asked for, and you return some sort of error message through a 200 status code. I see that pretty commonly, and it would be nicer if that were in the 400 or 500 range. If some error really happened, let the client know.

I remember a specific thing with you. You and I weren’t working together at Kelsus; I was working at Kelsus and you were working at a different company. We had to negotiate a work-around hack because a library I was using took such a left turn on any kind of error code that I needed the error you were returning to be in the 200 range, so that I didn’t end up in this dark dungeon of error handling that the library provided. I needed to be in the happy place of the 200s and you were like, “No, I can’t do that.” We eventually had to do it. It had to work. So, if you’re writing libraries, make sure that the clients are also able to realize that not all error codes are necessarily bad. Sometimes there are error codes that should be easily manageable inside the client.

Chris: I think this gets into the 400 level codes as well. There are a bunch of things that could cause a client request to not be fulfilled on the back-end. A very typical one, like you mentioned, is the content is not there. I requested a particular entity by its resource ID and it just doesn’t exist, because maybe some other client deleted it. I’d get back a 404 Not Found, and everyone knows this from web browsers: you type an address into your browser bar for something that doesn’t exist on a particular site, and you’re going to get a 404 error. Your API should be doing the same thing.

In general, you have the 400 error code for Bad Request: something was formatted incorrectly about your request and you return back a 400. Another one that’s pretty common is 409, which represents a conflict. This is a way for the server to let the client know, “Hey, you tried to do something that’s inconsistent with the state on the back-end, so you may be out of sync, caller.” You can use that information to correct yourself.

Jon: […] demonstrated that bear trap memory of yours. I swear that is what it was.

Chris: I’m sure it was. I believe what it was is the application was doing something simple like you have a feed of items and those items you want to be able to mark them as either some of them are favorites versus not favorites. You can have multiple apps open doing this. One of the apps I can mark one as a favorite but on the other app, it still thinks that it’s not a favorite, so I can tap favorite on that. It sends, “Go mark this as favorite,” to the server. So the server looks at it and says, “You’re trying to mark the status as favorite, but it already exist as favorite. You just sent me something that doesn’t make sense. I think that probably the state on you is not up-to-date so I’m going to send back an error code letting you know that I didn’t mark this as favorite because it already is. That represents a conflict, so I’m going to return back a 409 error code.” That is a hint to you as a client to now go and refresh your state so you can get it to what it really is. You’re just out-of-state.

It’s really useful when clients cache information for performance reasons, obviously. This is the way of letting you know that your cache now needs to be invalidated. I think this was the situation that we ran into.

Jon: I do. I think you’re right. Let’s talk about one other 400-type issue because I think it comes up a lot. I don’t know the right answer for this, but basically, if you make a request to an API and you’ve got something a little wrong with it, maybe every first letter is supposed to be capitalized or there’s a part of a JSON payload that needs to be a string and not a JSON object, it’s so nice when the API tells you what you did wrong. It’s the difference between a super productive interface and one that requires lots of bloody fingers on the keyboard for days and days, trying to figure out what’s wrong and why what you’re sending to the API isn’t working.

We didn’t have time to really document out the error responses and make them easy to understand. That can happen and I get that. But I’ve heard some arguments about not doing this on public APIs or on private APIs that shouldn’t be available to the public because it could present a security risk, because it could let somebody reverse engineer the API by sending too much information about what you’re giving it that it’s not expecting. Have you heard this too, Chris?

Chris: Yeah. Again, this is a wide open area and it’s so specific to your API and the situation, but you do want to take into account things like security and just how much information you are giving up. You want to return back information that’s going to be helpful to the caller but without divulging perhaps more information than they should have.

One of the really common scenarios for this is APIs that have logins. I’m logging in with a username and password, then I get a response back telling me whether or not that was successful. Say the username they gave was correct but the password is wrong, and you return back an error code, a 401, you’re not authorized, and then in the details of the error you tell them, “Oh, your password is incorrect.”

By doing that, you’ve given me some information that, “Oh, it’s a valid username. It’s a valid email address. That’s information I didn’t have before. Now that I know it’s a valid account, I can now use that information to not just figure out what the password is for that,” versus if you return back something just more generic like, “Either your username or your password was incorrect,” you’re no divulging that additional information.

I think that’s one of those examples. Other ones get more maybe in the 500 level error codes where something has gone wrong in the back-end. Maybe a database is down or something like that, you try to connect to your PostgreSQL database server and it failed. Or worse, maybe it’s been misconfigured and the password for the connection string is wrong or something like that. that may throw an error on the back-end and the error message that your error assets maybe it does have a string saying, “Hey, the connection failed because the credentials and for debugging reasons or whatever, it includes the connection string that he used.” If you then just return that verbatim in your error response, that’s a huge no-no. You just divulged your database connection string to any caller. I did that. That’s definitely a case where you need to sanitize your errors.

Error sanitization is super important for the 500 level codes. Almost always, you don’t want to give back the reasons why it went bad on the back-end to the caller. You just need to let them know it did go bad, because again, there’s nothing they can do. They didn’t cause the database connection problem. They don’t need that information, and they can’t do anything with it other than bad stuff, so don’t return it.

You want to keep that in mind for the 400 level stuff as well. You have to consider it on a case-by-case basis. Be careful; you don’t need to return all of this information. But if they sent you a bad request because a particular query parameter was out of range, you should give them the information that lets them know it was out of range, and that that was the reason the request was bad.
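
As a sketch of that balance (all names invented): unexpected back-end failures get logged in full on the server but surface to the caller as a generic 500, while a bad query parameter gets a 400 that says exactly what was out of range.

```python
import logging
from flask import Flask, jsonify, request

app = Flask(__name__)
log = logging.getLogger("api")

@app.errorhandler(Exception)
def handle_unexpected(err):
    # Keep the gory details (connection strings, stack traces) in server-side logs only...
    log.exception("unhandled error: %s", err)
    # ...and hand the caller a sanitized, generic 500.
    return jsonify({"error": "internal server error"}), 500

@app.route("/search")
def search():
    raw = request.args.get("limit", "10")
    if not raw.isdigit() or not 1 <= int(raw) <= 100:
        # 400-level errors *should* tell the caller exactly what they got wrong.
        return jsonify({"error": "limit must be an integer between 1 and 100"}), 400
    return jsonify({"results": [], "limit": int(raw)})
```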

Jon: A way to remember this is: don’t air your dirty laundry in the 500 level errors that you return to your clients. That’s a great way to take the express elevator to the number one entry on Hacker News. People love to find somebody airing their dirty laundry in a 500 API response and then post about it. Everybody loves to pile on and talk about how stupid that person was for doing that.

Chris: Yeah. Such a bad way to rise to the top of the ranks. It happens. We would all like to be popular there but that’s the wrong way. Don’t do it that way.

I think that covers error handling in general and 400 versus 500 level errors. Maybe one slight nuance that’s important: in this world of microservices, where back-end implementations are making upstream calls to other services, it gets interesting because the back-end service becomes a client of some other service. Now when it makes a call to that dependent microservice and gets back a 400 error, chances are that 400 error represents a 500 error to the original caller.

You can’t just always say, “Oh, whenever it’s a 400 level error, it’s this and not a big deal versus a 500 level error. You have to ask yourself, “Who’s the caller and what does that mean?” If my API implementation is making a malformed request to a dependent service, that probably represents a bug in my back-end code and I shouldn’t be returning a 400 back to my client because my client thinks it’s their fault. It wasn’t their fault. It was actually the server’s fault. So it’s really important to do that translation right. That should become a 500 level error to the originating client.

Jon: Right. This happened to me recently in a very common AWS architecture pattern, where I was getting a 500 error back from API Gateway, a 502, and the 502 was really because my Lambda function that API Gateway was calling was not returning what API Gateway expected.

Chris: Yup, exactly. There’s definitely some very common conventions and some best practices you can do, but you always have to consider who’s the caller, who’s the provider here. Just keep that in mind and know that you need to do that translation as you walk through the chain of calls throughout your system.

Jon: Great. I think we have a few more minutes. I promised everybody that we would get to at least the one where I couldn’t pronounce it because I don’t know what it is. It seems like HateOps or HateOS or something. I don’t know what this is. What is that?

Chris: Full disclosure: I’m not sure I’ve really heard anyone pronounce it other than me, so I can’t guarantee this is the way other people pronounce it. I always call it HATEOAS. It’s an acronym for Hypermedia As The Engine Of Application State. An easier way to describe this is basically just Hypermedia; these are Hypermedia APIs. Really what it means is it’s an API style where you have just a very small number of well-known endpoints for callers to start with, and after that, the rest of the API becomes discoverable through the responses that come back.

You can think of it almost like the Choose Your Own Adventure books. This may not be the perfect analogy, but you start with one page or one entry point, you get back the response, and then the response basically tells you what your next choices are. There may be some information there that says, “Oh, if you want to go and fetch this resource, here is the way to do it. If you want to update this resource, here’s the way you do it. If you want to delete…”

Jon: I get it. Let’s make it concrete. “Give me all the orders. I know how to get all the orders. Okay, here they are, and with each one, here’s the URL to get that specific one. Here’s also one you can call if you want to make a new one. Also, with each one,” like you said, “here’s how you delete an order.”
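
A sketch of the kind of response Jon is describing (the link format and endpoints are invented; real Hypermedia APIs often use conventions like HAL): each order carries links telling the client how to fetch or delete it, and the collection says how to create a new one.

```python
from flask import Flask, jsonify, url_for

app = Flask(__name__)

@app.route("/orders")
def list_orders():
    orders = [{"id": "1"}, {"id": "2"}]  # stand-in data
    return jsonify({
        "items": [
            {
                "id": o["id"],
                "_links": {
                    "self":   {"href": url_for("get_order", order_id=o["id"]), "method": "GET"},
                    "delete": {"href": url_for("get_order", order_id=o["id"]), "method": "DELETE"},
                },
            }
            for o in orders
        ],
        # How to create a new order is discovered from the response, not hard-coded:
        "_links": {"create": {"href": url_for("list_orders"), "method": "POST"}},
    })

@app.route("/orders/<order_id>")
def get_order(order_id):
    return jsonify({"id": order_id})
```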

Chris: Right. That’s what these Hypermedia APIs are. Really, the Holy Grail here is something that is completely automated. You’re basically automating the consumption of the API through this data-driven way of interacting with the back-end. In principle, it’s something that’s very appealing and very powerful. In practice, it ends up becoming not nearly as magical. The promise is that the clients don’t really need to know anything other than those well-known endpoints. They start off with some path, they make the call, and they get all the information they need back in that payload that presents them all the various options they can do next and then to go from there.

By the way, this is all based upon, again, the web. This is the way the web works. Think of Google. You go to Google, you type in your search, and you’re presented with a bunch of different links. Then you click on a link and what does that do? It takes you to another page and there you get another bunch of links. Then you decide what to do there when you click a link. That’s Hypermedia. It’s really taking that same idea and extending it to: this is how your API works.

Jon: The difference is that the decision-making machine that’s able to deal with that Hypermedia API in the example you gave is wetware. It’s our brains. We know how to handle unexpected information and what to do about it. We can do that as people, but no computer can do that yet that I’m aware of.

Chris: Not only that, but also the list of actions in that particular example is really limited. You can just choose a link. Really, the only thought process is, “What link do I want to click next?” and the action is just a click. Versus with a RESTful Hypermedia API where you’re saying, “Oh, here’s how you delete, here’s how you update, here’s how you get, and here’s how you patch,” or whatever, what are the semantics of those actions?

Jon: “What presents that button for deleting and what happens when I do delete? What do I do with that response?”

Chris: Yes. Your client has to understand those semantics and it has to be able to recognize how to affect those actions based upon these Hypermedia links that are included in the responses. That’s where it gets really cloudy and murky, and some of that luster comes off of this data-driven API thing. All that said, I have built Hypermedia APIs and it worked pretty well.

Jon: I’m still sort of torn about it because sometimes the work to do that for the client, to use the Hypermedia information that comes back, sometimes that’s more overall work in a project than just being like, “Okay, call this whenever you want to delete.” Telling me outside of the program, like you and me, you’re the developer, I’m the developer, you tell me with words to my human brain to do this instead of telling me through the code. Because then, I have to write code that knows how to do something with.

I guess it just really depends on how much of your application really is common patterns responding to unknown sets of things, like, “We don’t know what the URL is going to be, but every time you get this kind of resource, you know you’re going to do this with it.” In such a situation, it’s like, “Oh yeah, Hypermedia,” but in lots of other situations, it just feels like, “Just hard-code the endpoint, or not the entire endpoint but just the path.”

Chris: I should point out that one of the biggest reasons for wanting to use Hypermedia API is it does give the possibility of launching new features, increasing the functionality of your API without any client change. That’s the Holy Grail here. I think in certain situations this makes sense to try to go do this.

If you have a pretty simple application but you know you want to be able to extend it, maybe add CRUD for new things, like you go from orders to then also include shipments, sales, whatever it may be, maybe something like that lends itself well to this, where you can keep the semantics of it pretty discrete but you want to continue expanding the coverage of the types of resources it involves.

You don’t have to make any updates to your clients. You can have a mobile app and you don’t have to push any new updates to the store to be able to consume the new API because it’s all data-driven. That’s pretty attractive but again, you have to balance that out with like, “Okay, can I really cover the semantics of this adequately in that client to take advantage of that condition?”

Jon: Yeah. This is just such an important thing, making this decision whether to use Hypermedia or not, because it impacts the learning curve on the team and the time to build stuff. I just want to point out that if you’re building something like an order system or a sales system, it’s more likely that the reactions to the various types of resources in that system, or objects in that system, are going to be very specific to the thing you’re working on. So you handle an order entirely differently than you might handle an invoice.

Therefore, that kind of system might lend itself less to Hypermedia than for example a social media system where, “Hey, there might be posts. I can share a map, I can share a video, I can share a photo, I can share a thought about myself or a little friendly widget that I draw on my phone. All different kinds of things I can share,” and they’re all different objects, maybe in different tables in a database or something for whatever reason, so they have maybe different endpoints.

But at the end of the day, the application might be able to use a similar mechanism for displaying each one. “Let me just get the stuff that I’m supposed to show to the user.” In that case, it could be really handy to do that because then I could just throw a new Hypermedia API at it and I just have the ability to add a VR video to my feed without changing the client.
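
A rough sketch of the client side of that idea (the entry point and payload shape are hypothetical): the feed client only hard-codes one well-known URL and renders whatever item types and links the server sends back, so a new item type like a VR video needs no client update.

```python
import requests  # pip install requests

FEED_URL = "https://api.example.com/feed"  # hypothetical well-known entry point

def render_feed():
    feed = requests.get(FEED_URL).json()
    for item in feed.get("items", []):
        # Render generically by declared type and follow whatever links came back.
        print(item.get("type", "unknown"), item["_links"]["self"]["href"])
        # A brand-new item type still shows up here without shipping a client update,
        # as long as it follows the same generic shape.

if __name__ == "__main__":
    render_feed()
```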

Chris: Yeah, you bring up a good point. It doesn’t have to be all-or-nothing. Your entire API doesn’t have to be a Hypermedia API; you can use it for just the parts where it makes a lot of sense. All that said, Hypermedia APIs are interesting, but the real-world practicality of implementing and consuming them is not straightforward. I’ve done many APIs, I’ve been part of building many APIs, and of all those, really only one was a Hypermedia API. So take that for what it’s worth. It’s a limited use case, but it’s something that’s good to know about and to consider.

Jon: We’re truly out of time but then we’re going to talk about tools for developing your API? Maybe we can just quickly say Apiary’s pretty cool. Lets you do stuff, lets you define the API, lets you even abuse it a little bit just in a mock kind of way, and Swagger is the language for defining the API that you can use inside of Apiary or in other tools as well. I think that you’d want to quickly add to that?

Chris: Again, if you are building APIs, by all means, please do yourself a favor and go use one of these tools. There are so many of them out there and they’re all really good. Apiary, Swagger, Kong: they all allow you to define your API, they provide nice documentation for it, and they provide mocks for you so you can be up and running. Your clients can actually hit the API and get back responses without you writing a single line of code. It’s a great way to get going and to really focus on the actual development of the spec, the defining of the API. So, absolutely, definitely go use one of these tools when you go build APIs.

Jon: Very cool. Great conversation and thanks for putting this together, Rich, if you’re still there.

Rich: Yeah. Sorry, I was on mute. Rich is asleep. APIs… APIs…

Jon: Rich’s company, Secret Stache, has been developing an API for us to use for our clients, construction-instruction services. Mostly, Rich was not asleep and was getting a little something out of this himself.

Rich: I was.

Chris: Awesome.

Jon: All right, thanks so much, Chris and Rich. Talk to you guys next week.

Chris: All right. Thanks guys. See you. Bye.

Jon: Bye.

Rich: Later.

Well dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with show notes and other valuable resources is available at mobycast.fm/49. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you and we’ll see you again next week.
