
Classic "confused deputy" problem. What is the current recommendation in the modern microservices world to solve it?

When user agent (UA) makes authenticated call to service A, which in turn makes call to service B:

UA -[user auth]-> A -[????]-> B

how to pass authentication information from A, when making a call to B? Options I can think of:

- pass the UA token as is. The problem is that the token becomes too powerful and can be used to call any service.

- pass A's own token, with the user auth info as an additional field. This doesn't solve the confused deputy problem: A's own token can be combined with any user's auth info, so service B can be tricked into serving data in B that doesn't belong to the user.

- Mint a new unique token derived from the tuple (A's own token, UA token, B's service name). B then extracts the user information from the token presented by A and authorizes the request. This seems to solve the confused deputy problem, because A has no access to other UA tokens, so it can't mint a token for the wrong user. The downside is that token minting should probably live in a separate service, and calling it for almost every request between two microservices makes it a bottleneck pretty quickly.

I've never seen the last one in real life; maybe it has some critical flaws I'm failing to see?
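For what it's worth, the third option can be sketched with nothing but an HMAC. Everything below is illustrative, not any real product's API: the key handling, the token format, and the `extract_*` helpers are all made up for the sketch.

```python
import base64
import hashlib
import hmac
import json
import time

# Key known only to the minting service and to services that verify its
# tokens. In reality this would be per-audience or asymmetric; a single
# shared secret keeps the sketch small.
MINT_KEY = b"minting-service-secret"

# Stand-ins for real token introspection; toy tokens here look like
# "svc:A" and "user:alice".
def extract_service(token: str) -> str:
    return token.split(":", 1)[1]

def extract_user(token: str) -> str:
    return token.split(":", 1)[1]

def _sign(payload: dict) -> str:
    body = base64.urlsafe_b64encode(
        json.dumps(payload, sort_keys=True).encode()
    ).decode()
    sig = hmac.new(MINT_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def mint_delegated_token(a_token: str, ua_token: str, target: str) -> str:
    """Mint a token bound to (A's identity, the user, target service B)."""
    payload = {
        "svc": extract_service(a_token),  # who is calling (A)
        "sub": extract_user(ua_token),    # on whose behalf
        "aud": target,                    # who may accept it (B)
        "exp": int(time.time()) + 60,     # short-lived
    }
    return _sign(payload)

def verify_delegated_token(token: str, my_name: str) -> dict:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(MINT_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["aud"] != my_name:
        raise ValueError("token was not minted for this service")
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```

Since A can only mint through this service, and the service requires a real UA token as input, A can't fabricate a delegated token for a user it never saw, and B rejects anything whose audience isn't B.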



Capability systems are designed specifically for this purpose. In such a system, a capability specifically for the user's right to access A and B is exposed as a handle / token, and services A and B can't access anything without first being given such a handle. Notably, capabilities can be constrained so that they aren't the keys to the kingdom.


Are there open-source projects that can be used to build such a system?


There are a few but Cap'n Proto is probably the most mature at this point: https://capnproto.org


Spotify uses per-user encryption, which is an approach that can solve this: https://engineering.atspotify.com/2018/09/18/scalable-user-p...

That way account A couldn’t access account B’s decryption key to get to their private video data


That's really interesting. Another reason for greater use of user-level encryption.


This isn't a confused deputy problem. There's simply no authentication on the endpoint. As the article says, it's Insecure Direct Object Reference.


I think it depends how you look at it.

> A confused deputy is a legitimate, more privileged computer program that is tricked by another program into misusing its authority on the system.[0]

In this case, we know we don't have access to open the safe, but we were able to convince the deputy, who does have access to the safe, to give us what's inside, one small piece at a time. The deputy didn't intend to empty the safe; he was only showing little bits of what's inside! That's all he's allowed to do with his access!

I see where the OP is coming from in calling it that.

[0]https://en.wikipedia.org/wiki/Confused_deputy_problem


There's a sense in which it is also a confused deputy problem though. You don't have authority to the object (the video) yourself, but the Ads service does, and you can convince the Ads service to use its authority to reveal information on the video that you don't otherwise have access to.


I guess. If the ad service let you create an ad consisting of another video, and you could embed a private video, that'd be pretty clear.

This reminds me more of the Facebook bug where www.facebook.com had access control but mobile.facebook.com didn't. I don't really consider every endpoint to be a "deputy".


I believe Macaroons[1] attempt to solve this problem.

[1] https://research.google/pubs/pub41892/
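The core trick in the Macaroons paper is a chained HMAC: each added caveat re-keys the signature from the previous one, so anyone can narrow a token but nobody can widen it. A toy sketch (the tuple format and the `key = value` caveat syntax are made up here; the real design has first- and third-party caveats and more):

```python
import hashlib
import hmac

def _chain(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str):
    # A macaroon is (identifier, caveats, signature); the signature is an
    # HMAC chain seeded with a key only the target service knows.
    return (identifier, [], _chain(root_key, identifier))

def attenuate(macaroon, caveat: str):
    ident, caveats, sig = macaroon
    # Anyone holding a macaroon can add caveats, and caveats only narrow
    # it: the new signature is derived from the old one, so a caveat can't
    # be stripped without knowing the root key.
    return (ident, caveats + [caveat], _chain(sig, caveat))

def verify(root_key: bytes, macaroon, context: dict) -> bool:
    ident, caveats, sig = macaroon
    expected = _chain(root_key, ident)
    for caveat in caveats:
        key, _, value = caveat.partition(" = ")
        if str(context.get(key)) != value:
            return False  # caveat not satisfied by this request
        expected = _chain(expected, caveat)
    return hmac.compare_digest(sig, expected)
```

In the A/B scenario, B could mint a macaroon for the user's session and attenuate it with caveats like "user = alice" and "service = A" before handing it out, then refuse any request whose context doesn't satisfy every caveat.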


Your third solution basically reinvents Kerberos. I don't think Kerberos envisioned services making calls to each other though. In the 1980s, I think it was assumed that the client would contact each service separately and combine the results itself.


AFAIK Keycloak by Red Hat, which is auth as a service, passes the token as is.

Not sure what you mean by "token becomes too powerful and can be made to call any service." Each sub-service can have, in the token, what is required to access it, and that can be managed by the main frontend service.

There is a limit to token size, but you can easily trim claims and such so you don't go overboard in the majority of cases.


> token becomes too powerful and can be made to call any service.

If the UA token is passed as is down the chain of microservices, then every service starts to accept it. Intercepting this single token allows an attacker to crawl the whole internal system. It won't grant access to other users' data, but it still doesn't seem like a secure solution to me.

> Each sub-service can have in token what is required to access it, and that can be managed by main frontend service.

This would require the UA token to contain an audience claim for every single internal service; that's unlikely to pass security review.


> Intercepting this single token allows an attacker to crawl the whole internal system.

An attacker can intercept it, but cannot change it. They can replay it for a while (though only briefly, depending on the lifetime of your access token, which is usually minutes), but you can protect against that.

> This would require UA token to contain audience claim of every single internal service, this is unlikely to pass security review.

I have penetration tests on my main service. Sub-services are not accessible from outside and can be secured to the desired level on the internal network. I've never had security inspections on internal services (I work on highly critical gov systems). Maybe in some domains it's like you say, but I believe it's generally not a problem. Furthermore, we need some perspective on this: there are multiple easier ways to hack a service, and there are probably many other exploits that are easier to achieve.


If the token having claims is a security issue, the entry point could swap the user's token (containing just their unique id and an expiration) for an authorized token with claims, and keep that token within the local network. Then there's a single token-broker layer and the claims stay internal. I'm not sure why claims would be an issue to have in the original token, though; could you provide some more info on that?


Your first option has an additional threat vector: the UA token is replayable against the first service. In case of compromise, as you say, the token is too powerful and can do too much.

The second option is indeed bad.

The third option is used heavily in production for both cost savings and latency reduction.

There's a 4th option, which is to go back to your auth server with the UA token and get a new one representing all the data in your tuple, still signed and valid, representing (A, user, B). This is the on-behalf-of flow, and it's standardized in OAuth 2 under the name Token Exchange, roughly.

(Edit: actually, my 4th and your 3rd are the same. My 3rd is an improvement in that it doesn't require minting a new token.)
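For reference, an RFC 8693 token-exchange request is just a form POST to the auth server's token endpoint; A authenticates itself on the same request (e.g. with its client credentials) and gets back a new token scoped to (A, user, B). A sketch of building the request body (the token value and audience name are placeholders; the URN strings are fixed by the spec):

```python
from urllib.parse import urlencode

def build_token_exchange_request(ua_token: str, target_audience: str) -> str:
    """Build the form-encoded body of an RFC 8693 token-exchange request."""
    params = {
        # Fixed grant-type / token-type URIs from RFC 8693:
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": ua_token,  # the user's token that A received
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": target_audience,  # service B
    }
    return urlencode(params)
```

The response contains a fresh access token valid only for the requested audience, on behalf of the user, so intercepting it doesn't open the rest of the internal system.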


Give the user agent two tokens: one for A and one for B (let's call it UB). Pass UA and UB to A. A passes its own token to B plus the UB token. B uses the user info from UB and roles from both UB and A's token.

UB has a list of allowed intermediates (in this case, A) so the user agent doesn't send it to every service.

In my implementation there were various kinds of tokens, so UB couldn't be used by itself to invoke B directly.

For our situation all this complexity turned out to be not worth it. :-/


If you use a service mesh (such as Istio), you can have all inter-microservice communication go over mutual TLS. Assuming you only expose an API gateway to the outside world, have the gateway handle authentication; then each service can handle any feature-level authorization with that user info.

Bonus: When using a mesh service like this, you can also ban/rate-limit/load-balance/canary calls between any two microservices if necessary.


The idea that client A has its identity authenticated by service B, and that service B checks that client A is authorized to access some endpoint, does not solve the problem of B accessing content on behalf of A that user U should not get to see.

The way Google does mutual authentication between services (which, I reiterate, does not address this problem) is described in great detail at https://cloud.google.com/security/encryption-in-transit/appl...


Hash the token from A with a shared secret that A and B both know but the UA does not, then pass both the token and the hash?
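That's essentially A attaching an HMAC over the UA token, which the UA can't forge because it lacks the secret. A minimal sketch (secret distribution is obviously simplified; names are illustrative):

```python
import hashlib
import hmac

SHARED_SECRET = b"secret-known-to-A-and-B-only"  # the UA never sees this

def a_forward(ua_token: str):
    # A vouches for the UA token by attaching a tag the UA can't forge.
    tag = hmac.new(SHARED_SECRET, ua_token.encode(), hashlib.sha256).hexdigest()
    return ua_token, tag

def b_accept(ua_token: str, tag: str) -> bool:
    # B recomputes the tag; a valid tag proves the token passed through A.
    expected = hmac.new(SHARED_SECRET, ua_token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```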


I like it. One simplification might be to just pass two tokens: the UA token as is and A's own token.

Service B then uses A's token for authentication, but the UA token for authorization.


You could also have A just sign the token for the same effect.


> I've never seen last one in real life, maybe it has some critical flaws I am failing to see?

Doesn't Kerberos solve this with s4u2self and s4u2proxy and other delegated credentials?

I'll admit it isn't quite the exact same, but the general idea is the same.


If the UA token has all the necessary permissions embedded in it, then it cannot be used to call any service for which the user is not authorized.



