NIST SP 800-228 ipd (Initial Public Draft)
Guidelines for API Protection for Cloud-Native Systems

Ramaswamy Chandramouli
Computer Security Division
Information Technology Laboratory

Zack Butcher
Tetrate, Inc.
March 2025
Certain commercial equipment, instruments, software, or materials, commercial or non-commercial, are identified
in this paper in order to specify the experimental procedure adequately. Such identification does not imply
recommendation or endorsement of any product or service by NIST, nor does it imply that the materials or
equipment identified are necessarily the best available for the purpose.
There may be references in this publication to other publications currently under development by NIST in
accordance with its assigned statutory responsibilities. The information in this publication, including concepts and
methodologies, may be used by federal agencies even before the completion of such companion publications.
Thus, until each publication is completed, current requirements, guidelines, and procedures, where they exist,
remain operative. For planning and transition purposes, federal agencies may wish to closely follow the
development of these new publications by NIST.
Organizations are encouraged to review all draft publications during public comment periods and provide feedback
to NIST. Many NIST cybersecurity publications, other than the ones noted above, are available at
https://csrc.nist.gov/publications.
Authority
This publication has been developed by NIST in accordance with its statutory responsibilities under the Federal
Information Security Modernization Act (FISMA) of 2014, 44 U.S.C. § 3551 et seq., Public Law (P.L.) 113-283. NIST is
responsible for developing information security standards and guidelines, including minimum requirements for
federal information systems, but such standards and guidelines shall not apply to national security systems
without the express approval of appropriate federal officials exercising policy authority over such systems. This
guideline is consistent with the requirements of the Office of Management and Budget (OMB) Circular A-130.
Nothing in this publication should be taken to contradict the standards and guidelines made mandatory and
binding on federal agencies by the Secretary of Commerce under statutory authority. Nor should these guidelines
be interpreted as altering or superseding the existing authorities of the Secretary of Commerce, Director of the
OMB, or any other federal official. This publication may be used by nongovernmental organizations on a voluntary
basis and is not subject to copyright in the United States. Attribution would, however, be appreciated by NIST.
Publication History
Approved by the NIST Editorial Review Board on YYYY-MM-DD [Will be added to final publication.]
Submit Comments
sp800-228-comments@nist.gov
Additional Information
Additional information about this publication is available at https://csrc.nist.gov/pubs/sp/800/228/ipd, including
related content, potential updates, and document history.
All comments are subject to release under the Freedom of Information Act (FOIA).
Abstract
Modern enterprise IT systems rely on a family of application programming interfaces (APIs) for integration to support organizational business processes. Hence, the secure deployment of APIs is critical for overall enterprise security. This, in turn, requires the identification of risk factors or vulnerabilities in various phases of the API life cycle and the development of controls or protection measures. This document addresses the following aspects of achieving that goal: (a) the identification and analysis of risk factors or vulnerabilities during various activities of API development and runtime, (b) recommended basic and advanced controls and protection measures during the pre-runtime and runtime stages of APIs, and (c) an analysis of the advantages and disadvantages of various implementation options for those controls to enable security practitioners to adopt an incremental, risk-based approach to securing their APIs.
Keywords
API; API endpoint; API gateway; API key; API schema; web application firewall.
Table of Contents
Executive Summary
1. Introduction
2. API Risks — Vulnerabilities and Exploits
 2.4.1. Unrestricted Compute Resource Consumption
 2.4.2. Unrestricted Physical Resource Consumption
 2.6.1. Input Validation
 2.6.2. Malicious Input Protection
 2.7.1. Gateways Straddle Boundaries
 2.7.2. Requests With a Service Identity but No User Identity
 2.7.3. Requests With a User Identity but No Service Identity
 2.7.4. Requests With Both User and Service Identities
 2.7.5. Reaching Out to Other Systems
 2.7.6. Mitigating the Confused Deputy
 2.7.7. Identity Canonicalization
3. Recommended Controls for APIs
 3.1.1. Basic Pre-Runtime Protections
 3.1.2. Advanced Pre-Runtime Protections
 3.2.1. Basic Runtime Protections
Acknowledgments
The authors would like to thank Orion Letizi, technical writer at Tetrate, for providing continuous, ongoing edits during the development of this document. We would also like to thank Erica Hughberg, an engineer at Tetrate, and James Gough, a Distinguished Engineer at Morgan Stanley, for their feedback on the initial outline for controls. Their extensive hands-on experience in running API security programs in large enterprises helped us to address current API security issues and incorporate state-of-practice API security controls into our recommendations. Last but not least, the authors would also like to express their thanks to Isabel Van Wyk of NIST for her detailed and extensive editorial review.
1. Introduction
Application programming interfaces (APIs) represent an abstraction of the underlying implementation of a digital enterprise. Given the spatial (e.g., on-premises, multiple clouds) and logical (e.g., microservices) nature of current enterprise applications, APIs are needed to integrate and establish communication pathways between internal and third-party services and applications. Informally, APIs are the lingua franca of modern IT systems: they describe what actions users are allowed to take. They are also used in every type of application, including server-based monolithic, microservices-based, browser-based client, and IoT applications.

Fig. 1. API, API Endpoint, Service and Service Instance
Fig. 2. (Top to Bottom) Service API, Façade API, Service and Application (Monolithic)
Less formally, we can think of the APIs we expose outside the organization as a facade over a set of Services. Those Services implement internal APIs (Service APIs). Services in the organization communicate with each other via those internal APIs, sometimes directly and sometimes via an API gateway. The API gateway is responsible for some policies, like authentication and rate limiting, as well as for mapping the facade APIs for external clients to internal APIs. Then, to get a handle on things organizationally, we often group related Services into a bucket called an Application.
While we tend to think of APIs in the context of exposing functionality to clients or partners, APIs don't exist solely at the edge of our infrastructure. Any time systems communicate, there is some API involved, even if that API is something like CSV over FTP. The examples in this SP focus primarily on "modern" APIs exposed via mechanisms like HTTP/REST, gRPC, or SOAP, but we believe the principles in this SP are universal and should be applied to all APIs.
in a system, including those exposed to the outside world (i.e., public APIs) and those intended only for other applications in a given infrastructure (i.e., internal APIs).

Fig. 3. DevSecOps life cycle phases
Each of these two categories is further divided into two subcategories based on organizational maturity (i.e., basic and advanced), which enables enterprises to adopt them using an incremental, risk-based approach.
A prerequisite for defining any API protection measure or policy, irrespective of its category or subcategory, is that the protections must be expressed in terms of nouns and verbs that pertain to API components, API endpoint components, API requests, and API responses, which in turn contain references to resources/data and operations on those resources. These nouns and verbs form the fundamental surface that is exposed to the consumers of APIs and API endpoints.
• Appendix B illustrates the API controls related to each DevSecOps phase
In line with identity-based segmentation, every service for an API endpoint should perform two levels of authorization: 1) service authorization and 2) end-user-to-resource authorization [6]. However, implementing both levels of authorization can still leave many APIs open to risk. Individual fields of a resource often need to be authorized independently of the resource itself. For example, if additional debug information is embedded in an "internal" field of the API object, that field should not be visible to "external" callers (i.e., callers not authorized to see privileged debug information).
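As an informal illustration of these two authorization levels plus a field-level check, consider the following Python sketch. The request shape, policy tables, and field names are hypothetical and not part of this SP's recommendations:

```python
from dataclasses import dataclass

@dataclass
class Request:
    service_id: str   # authenticated workload identity (e.g., a SPIFFE ID)
    user_id: str      # authenticated end-user identity (e.g., a JWT "sub")
    resource: str
    action: str

SERVICE_ALLOWLIST = {"spiffe://example.org/frontend"}
USER_GRANTS = {("alice", "orders/42", "read")}
INTERNAL_FIELDS = {"debug_info"}  # fields that "external" callers must not see

def authorize(req: Request) -> bool:
    # Level 1: is the calling service allowed to reach this endpoint?
    if req.service_id not in SERVICE_ALLOWLIST:
        return False
    # Level 2: is the end user allowed to take this action on this resource?
    return (req.user_id, req.resource, req.action) in USER_GRANTS

def filter_response(obj: dict, external_caller: bool) -> dict:
    # Field-level step: strip privileged fields before returning data to
    # callers who are not authorized to see them.
    if not external_caller:
        return obj
    return {k: v for k, v in obj.items() if k not in INTERNAL_FIELDS}
```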
Authorization risks can be categorized in three ways:
1. Missing authorization: There is no fine-grained, resource-level authorization present. For example, a legacy system may be operating under a different access model (e.g., in a perimeter-based model, network access is treated as authorization), or there may be implementation bugs (i.e., an access check that should be enforced is not).
2. Incorrect authorization: The application performs an end-user-to-resource authorization check but fails because it checks any or all of the following: the wrong end-user identity, the wrong permission, or the wrong target resource.
3. Insufficient authorization: The application performs a resource-level authorization that succeeds, but the resource itself contains information that is "privileged" or not intended for the level of access implied by access to the resource itself. This is often the root cause of the risk of leaking sensitive information (see Sec. 2.5).
• Weak or predictable tokens, default accounts, and default passwords (e.g., a hard-coded bootstrap account with the same username and password on all devices, test accounts with predictable names and weak/guessable passwords).
These risks are best mitigated by a combination of rate limiting, quotas, spending policy controls in third-party software, bot/abuse detection, and application or business flow changes. These risks manifest as:
• Impacts on business operations (e.g., damage to equipment and personnel, the creation of fake orders that require human effort to sort and remove)
• Impacts on customer relationships (e.g., scalpers automatically buying inventory to re-list at a higher price elsewhere)
• Infrastructure co-opted for abuse or harassment (e.g., multi-factor authentication fatigue attacks, where an attacker triggers text spam to a user's phone via an SMS two-factor authentication system [9])
• Unplanned expenses (e.g., consuming far more of a third-party service than planned due to satisfying requests made by a malicious user)
Mitigations for both compute and physical resource consumption are similar. For compute resources, limit how users interact with the system. For physical resources, limit both how users interact with the system and how the system interacts with external systems, and consider these limits early in the design phase. Mitigating these risks can sometimes require business flow changes.
service in the user identity domain and attach that service account credential as the end-user credential to requests that it forwards into the part of the infrastructure that supports identity-based segmentation.
Applications that perform identity-based segmentation will need to configure policy for that service account user so that it can act on all of the data that the batch job previously used its service identity to access. At the same time, the application can remove any support for special data access without an end-user credential. Finally, the existing infrastructure can be leveraged to audit and manage both user and service access to data.
An implication of this is that all applications attempting to implement identity-based segmentation without a user identity should adopt service accounts by changing their application code. This will simplify future migration into the identity segmentation domain and make the system more secure overall.
Fig. 4. Handling API Calls with User Identity & No Service Identity
However, the gateway's service identity is already in place between the gateway and the first service performing identity-based segmentation. For that first hop, three identities need to be handled on the request: the gateway's service identity, the service identity of the external service, and the end user's identity. As before, external service authorization can be performed at the gateway, which can then simply drop the external service identity. Services should support validating both the end user and a workload identity via metadata from the request in addition to validating workload identity via the transport (e.g., mTLS certificates).
For example, suppose that an organization (A) uses a SPIFFE X.509 identity via mutual TLS for service identity, as a service mesh does, (B) uses a JWT bearer token for user identity, and (C) chooses to represent external service identity as a JWT attached to the request. The mesh can then enforce that the gateway forward traffic to the service via (A), authenticate the service JWT and authorize the external service (C), and authenticate the end user (B) before forwarding a request to the application. This would fully support authenticating and authorizing all of the communicating parties, and the service in question would not need to be aware of the external service identity or credential. It would simply need to manage a policy of "allowed external service callers" alongside its set of "allowed internal service callers."
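The following Python sketch shows, informally, how a service behind the gateway might check the three identities in this example. The claim names and allowlists are assumptions for illustration; a real deployment would verify token signatures (e.g., against a JWKS endpoint) rather than trust pre-parsed claims:

```python
ALLOWED_GATEWAYS = {"spiffe://example.org/api-gateway"}  # (A) mTLS peer IDs
ALLOWED_EXTERNAL_SERVICES = {"partner-billing"}          # (C) service JWT subjects

def authorize_first_hop(mtls_peer_id: str,
                        external_svc_claims: dict | None,
                        user_claims: dict) -> bool:
    # (A) Transport identity: only the gateway may forward this traffic.
    if mtls_peer_id not in ALLOWED_GATEWAYS:
        return False
    # (C) External service identity, carried as a JWT in request metadata;
    # the service only manages an "allowed external service callers" list.
    if external_svc_claims is not None:
        if external_svc_claims.get("sub") not in ALLOWED_EXTERNAL_SERVICES:
            return False
    # (B) End-user identity: basic subject/audience checks before app logic.
    return (user_claims.get("sub") is not None
            and user_claims.get("aud") == "example-api")
```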
the identity-based segmentation portion of the infrastructure (e.g., a JWT bearer token), and the external service's identity should be represented to the internal system as a token so that the policy can be enforced on all three identities in the first hop.
allows for concise and consistent sets of policy that govern access to other services and user access to data. Having both policies in place implements identity-based segmentation and dramatically improves security posture.

Fig. 5. Identity Canonicalization for Handling API Calls
For most organizations, implementing credential canonicalization will require either adopting an identity provider wholesale and standardizing on it throughout (including working out legacy integration so that legacy credentials can be used to obtain credentials via the new provider) or performing identity exchanges, as described in this section. The API gateway is ideally situated to enforce either choice. Performing identity exchanges also requires a mapping of identities across domains as well as a "token server," which uses that mapping to mint credentials.
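As a non-normative sketch, a minimal token server of this kind might look like the following in Python, assuming an identity mapping table is already populated; a production implementation would validate the inbound credential and sign the minted one (e.g., an RFC 8693-style OAuth 2.0 token exchange):

```python
import time

# (legacy domain, legacy subject) -> canonical internal identity
IDENTITY_MAP = {("legacy-ad", "DOMAIN\\alice"): "alice@example.org"}

def exchange(domain: str, subject: str, ttl_s: int = 300) -> dict:
    canonical = IDENTITY_MAP.get((domain, subject))
    if canonical is None:
        raise PermissionError("no cross-domain mapping for this identity")
    now = int(time.time())
    # Unsigned claim set shown for brevity; a production token server must
    # sign the minted credential and validate the inbound one first.
    return {"sub": canonical, "iat": now, "exp": now + ttl_s,
            "iss": "https://token-server.example.org"}
```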
3. Recommended Controls for APIs
In their earliest form, controls for APIs focused primarily on encryption in transit while delegating most other concerns to the application. Over time, a variety of challenges have emerged that necessitate the evolution of controls, including:
• The distributed nature of modern enterprise applications, which span multiple on-premises and cloud environments and communicate over the network using APIs
• The requirement to build robust systems that work around transient failures and handle large volumes of traffic
• An increasingly complicated API surface driven by business needs to integrate more deeply with partners and expose richer functionality to users
• Increasingly sophisticated attackers who have moved up the stack from low-level exploits and DoS attacks to application-level attacks that leverage the APIs that systems use to function
Controls for APIs should cover all of the APIs in the organization, including those exposed to end users, those exposed to partners, and those that are only intended for "internal" consumption. This document's controls are structured into two primary sections based on the iterative API life cycle discussed in Sec. 1:
1. Pre-runtime protections, which should be applied during design, development, and testing (see the sketch after this list). These include:
 a. Creating a well-defined specification for the API's contract using some interface definition language (IDL) (e.g., OpenAPI, gRPC, Thrift)
 b. Defining request and response schemas as part of that API specification
 c. Defining valid ranges of values for the fields of each request and response
 d. Tagging the semantic type of each field of each request and response
 e. Creating and maintaining an inventory of these API specifications across the organization, including ownership information
2. Runtime protections, which should be applied to each request and response to the API at runtime. These include:
 a. Encryption in transit
 b. End-user authentication and authorization
 c. Service-to-service authentication and authorization
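As an informal, code-first illustration of pre-runtime items (b) through (d) above, the following sketch uses the Pydantic library (an assumption for illustration, not a recommendation of this SP); the "semantic_type" tag is a hypothetical convention:

```python
from pydantic import BaseModel, Field

class CreateUserRequest(BaseModel):
    name: str = Field(min_length=1, max_length=64)                  # (c) valid range
    email: str = Field(max_length=254,
                       json_schema_extra={"semantic_type": "pii"})  # (d) semantic tag
    age: int = Field(ge=0, le=150)                                  # (c) valid range

class CreateUserResponse(BaseModel):                                # (b) response schema
    id: str
    name: str

# CreateUserRequest.model_json_schema() yields a schema that can be published
# with the API specification and recorded in the organizational inventory (e).
```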
information makes integration easier and less error-prone for clients and presents the opportunity for automated enforcement, such as of maximum latency (e.g., "the server will drop requests that take longer than 5 seconds to process") and rate limits (e.g., "by default, 5 calls per minute are allowed").
REC-API-4: An organizational API inventory of all internal and external APIs should be maintained. This is in line with the Identify function of the CSF [12]. That inventory should include (see the sketch after this list):
• Each API's specification, though the inventory does not need to be the API documentation
• Ownership information about the API to simplify the translation of runtime problems into organizational response
• Runtime information to enable operations and security teams to understand the impact of each API (e.g., service instances, instance IP addresses, runtime service ID, traffic volume, rate of requests and errors, the status of policy enforcement)
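Informally, a single record in such an inventory might take a shape like the following Python sketch; all field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ApiInventoryEntry:
    api_id: str        # e.g., "payments.v1"
    spec_url: str      # pointer to the API's specification, not the spec itself
    owning_team: str   # who to contact when this API misbehaves
    # Runtime data: instances, IP addresses, runtime service ID, traffic and
    # error rates, policy enforcement status, etc.
    runtime: dict = field(default_factory=dict)
```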
to facilitate easy audit and ensure continuous enforcement. Once the annotations are present, a variety of runtime implementations are possible.
REC-API-7: Annotate each field with its semantic type to indicate fields that contain sensitive information, such as personally identifiable information (PII), protected health information (PHI), or payment card information (PCI). This enables runtime systems to track data flow through the system, trigger alerting, and apply cross-cutting policy to ensure that data does not leak across inappropriate boundaries.
REC-API-8: Include runtime information in the API inventory along with ownership (REC-API-4). The inventory becomes substantially more valuable when annotated with runtime information (e.g., service instances and their IP addresses, runtime identities of the service instances, metrics or health information for the service, runtime metrics for traffic between services). This information can help security teams identify the blast radius of an event, operations teams identify problems and root causes, and application teams understand their application's behavior. Correlating this information with the APIs being served makes it simple to link clients to servers as a problem is traced back to its root.
may contain information (e.g., the device being used to access the system) in addition to a token from the software itself (e.g., an API key).
• REC-API-11-1: Identities must be cryptographically verifiable and should not use weak signing algorithms (e.g., no JWTs with "alg: none," weak algorithms, or short key lengths). SP 800-57 [16] discusses the strengths of cryptographic algorithms and the necessary key lengths for each.
• REC-API-11-2: Authentication should use standard mechanisms whenever possible. For example, end-user authentication should use a mechanism such as OpenID Connect (OIDC), OAuth2, or SAML. Services should use a mechanism like SPIFFE SVIDs, JSON Web Tokens (JWTs), API keys, or similar.
• REC-API-11-3: Tokens must support expiry so that credentials are cycled regularly. Checking for expiry must be an inherent part of token validation (see the sketch after this list). For example, when processing JWTs, the "exp" claim (RFC 7519 [18]) must be checked. Similarly, when processing an X.509 SVID, check the validity period's "Not Before" and "Not After" fields [19].
• REC-API-11-4: Return opaque tokens to untrusted systems. It is common for credential tokens to encode information about the internals of the system (e.g., minting a JWT to represent a user in the infrastructure that includes claims representing the user's capabilities in the system). This is a common scheme to simply and reliably enforce authorization per hop: validate the JWT, and check whether it contains the "claim" that represents the permission for an API endpoint. These claims encode all local operations that can be performed with data from the request and the local application.
Returning a token with these details to an external user risks leaking information about the internals of the system, including how permissions are modeled, the set of internal permissions/claims that map to a given external API endpoint, and information about the path that the request traverses through the infrastructure. These issues are critical to the safety of the API.
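The sketch referenced in REC-API-11-3 follows. It shows one way to apply REC-API-11-1 and REC-API-11-3 using the PyJWT library: the allowed algorithms are pinned (which rejects "alg: none" and algorithm-confusion attempts), and the "exp" claim is required and verified. The key, audience, and claim set are placeholders:

```python
import jwt  # the PyJWT library

def validate_user_token(token: str, public_key: str) -> dict:
    # decode() verifies the signature, the pinned algorithm list, the
    # audience, and (by default) the "exp" claim; "require" additionally
    # rejects tokens that omit the listed claims.
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256", "ES256"],        # never accept "alg: none"
        audience="example-api",
        options={"require": ["exp", "sub"]},
    )
```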
REC-API-12: Authorize the calling user and service for each identity on the request, including whether the calling software system is allowed to access the API endpoint and whether the end user is authorized to take the action on the resource represented by the endpoint. See SP 800-207A [6], controls ID-SEG-REC2 and ID-SEG-REC4.
Getting these authorization checks wrong is one of the most common mistakes in API security [7]. REC-API-6 discusses annotating each request or endpoint with the permission required by the end user to call that endpoint on a resource. With annotations like those in place, runtime tooling can be implemented to ensure that those annotations are transformed into runtime permission checks against the authorization system. Combined with a robust DevOps process that ensures annotations are present on APIs before they can be deployed, there can be a high degree of assurance that the correct authorization is being performed at the platform level. The idea of using the service mesh to achieve this is discussed in SP 800-204B [3].
REC-API-13: Validate each request and response against the API schema before it is processed by the business logic (e.g., ensure that the request has a "name" field that is a string and no other fields). This ensures that applications only receive well-formed input and minimizes a class of errors and data leaks caused by validation done inline in the business logic. Additionally, validate that each response from the server conforms to the expected response schema to help prevent a variety of data leaks, abuses, or mistakes.
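A minimal sketch of this kind of schema-first validation, assuming Pydantic, in which unknown fields cause rejection before any business logic runs:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class RenameRequest(BaseModel):
    model_config = ConfigDict(extra="forbid")  # unknown fields are an error
    name: str

def handle(raw_body: dict) -> str:
    try:
        req = RenameRequest.model_validate(raw_body)
    except ValidationError:
        return "400 Bad Request"               # rejected before business logic
    return f"renamed to {req.name}"

# handle({"name": "x", "admin": True}) -> "400 Bad Request"
```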
REC-API-14: Authenticate, authorize, then validate, in that order, to minimize the risk of leaking data to attackers, since validation messages are at especially high risk of leaking information. For example, rejecting a request with a validation error because it uses the same user-supplied name as another user's resource may unintentionally leak information to callers regarding the existence of that resource. A likely mitigation is an underlying per-user segregation of user-provided data, which often requires business logic changes in the application. Generic validations (REC-API-10) are exceptions to this order because they are not business-logic-aware and do not risk leaking information. They can be safely implemented by the platform ahead of authentication, which is often desirable to help protect the authentication and authorization systems from DoS and other attacks.
REC-API-15: Enforce limits on API and resource usage. API gateway teams should provide reasonable defaults for the organization, and application teams should be able to enforce their own, more fine-grained limits in their application or by leveraging the platform. Those limits should include:
• REC-API-15-1: Rate-limit all API access for all callers to ensure fair utilization across users, help with capacity planning, and mitigate the risk of unrestricted resource consumption. See REC-API-16 for recommendations on specific rate-limiting implementations.
• REC-API-15-2: Apply timeouts to all requests, including at the API gateway. This should be done at the TCP level, where connections are automatically timed out after a modest time (e.g., 5 minutes) rather than the kernel's default of more than one hour per connection. Timeouts should also be configured at the application level. If a required operation should complete in five seconds as part of the API contract, set a six-second timeout for it. This ensures that the resources in a service do not wait for a response that will never arrive.
• REC-API-15-3: Apply bandwidth and payload limits to enforce maximum request and response sizes. The "correct" limit is highly contextual and based on the organization and application (e.g., a bank will have very different expectations than a video streaming company). This helps avoid a variety of risks related to malicious input and DoS.
• REC-API-15-4: Validate and limit user-supplied query parameters (e.g., the amount of processing done, the size of the response based on user input), especially in the context of what the system can support and what is typical for users of the system (see the sketch after this list). For example:
 o The number of elements returned per page of a paginated list API. If a typical user has 100 items, cap the maximum number of elements per page at 1,000.
 o Time ranges in dynamic queries. If a system is intended for viewing recent events and the user can provide a time range, limit that range to the last 30 days rather than allowing the user to query "from 1972 onward."
 o GraphQL and similar API facade systems that support query languages over many APIs should have limits on the queries that users can execute (i.e., approved or predefined queries only) and caps on the number of outbound calls allowed in the execution of a single query.
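The sketch referenced in REC-API-15-4 follows: clamping user-supplied pagination and time-range parameters in Python. The specific caps are illustrative only:

```python
from datetime import datetime, timedelta, timezone

MAX_PAGE_SIZE = 1000                 # cap even if a user requests more
MAX_LOOKBACK = timedelta(days=30)    # "recent events" system, per the example

def clamp_page_size(requested: int) -> int:
    return max(1, min(requested, MAX_PAGE_SIZE))

def clamp_time_range(start: datetime, end: datetime) -> tuple[datetime, datetime]:
    now = datetime.now(timezone.utc)
    earliest = now - MAX_LOOKBACK
    # A query "from 1972 onward" is narrowed to the supported window.
    return (max(start, earliest), min(end, now))
```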
REC-API-16: Rate limiting is one of the most effective tools to mitigate unrestricted resource consumption and can increase the difficulty and discoverability of many attacks that aim to leak sensitive information via data exfiltration from API calls (e.g., scraping all chat logs from an organization with a script impersonating a chat client). Most organizations apply some type of rate limit to "external" traffic, but it is equally important to rate-limit internal callers: it is very easy to unintentionally cause a DoS on an internal system with poorly conceived code. It is equally critical to consider the limits placed on internal software that calls out to external systems (see Sec. 2.6.2).
The following recommendations on rate-limiting configuration address common pitfalls and misunderstandings:
• REC-API-16-1: Rate limits are not quotas. A quota is a usage limit on an API over an extended duration (e.g., per month) that is associated with a user's payment or billing structure. Many organizations have "API usage tiers" that map prices to higher per-month limits. These quotas need to be strictly enforced and are typically used to generate billing reports that are sent to customers. In contrast, rate limits are intended to protect the system from overuse and help ensure fair usage across separate, concurrent callers. Rate limits do not need to be exact in the way that quotas must be.
• REC-API-16-2: Rate limits should be dimensioned by user (e.g., 83 requests per 5 minutes per user), using the source IP address or end-user credential as the key; limits on total load provide little benefit. Rate limits without a user dimension (e.g., a service can receive 1,000 requests per 5 minutes total) are not particularly effective and allow some users to impact others (e.g., DoS risk). This is true even when total limits are dimensioned by service instance (e.g., a single instance cannot receive more than 100 requests per 5 minutes). Circuit-breaking functions must be used to provide protective limits on concurrency for a service instance. More information on circuit breaking and other resiliency and load-shedding techniques can be found in SP 800-204A, Sec. 2.3 [2].
• REC-API-16-3: Rate limits should be short in duration (e.g., per 60 seconds, per 5 minutes). A rate limit is defined as the number of calls allowed over a time period (e.g., 24,000 requests per 24 hours; 1,000 requests per hour; 16.5 requests per minute). Most systems allow for the configuration of both the number of calls and the amount of time over which they are allowed.
However, there are two problems with per-24-hour rate limits. First, they cause outages for callers that resolve themselves only when the rate limit server resets for the next 24-hour period, even if the rate limit was originally set correctly based on the client's expected usage. A successful API ecosystem will see increased usage of its APIs over time, which results in increased usage of their dependencies and those dependencies' APIs. This is the typical organic growth of API usage. Adjusting rate limits before they cause outages is almost never a priority for application teams, so over time, clients may see the API begin to randomly fail with 4xx errors. Second, per-24-hour rate limits can result in spiky traffic for the service, where a client consumes the entire 24-hour limit over a very short time and causes a heavy load on the services.
Shorter time limits allow clients to experience a few intermittent failures every minute or every few minutes as their traffic grows organically rather than a total failure under a per-24-hour limit. Additionally, the system will experience smoother traffic overall because each client must pace its consumption across windows rather than bursting, resulting in less load from each client at any given time (see the sketch below).
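The sketch referenced above: a minimal fixed-window rate limiter keyed per user with a short window, per REC-API-16-2 and REC-API-16-3. It is single-process only; real gateways typically keep these counters in a shared store (e.g., Redis) so that all instances see the same counts:

```python
import time
from collections import defaultdict

WINDOW_S = 60          # short window (REC-API-16-3)
LIMIT_PER_USER = 100   # per-user dimension (REC-API-16-2)

_counters: dict[tuple[str, int], int] = defaultdict(int)

def allow(user_key: str) -> bool:
    # user_key is the end-user credential subject or source IP address.
    # (Expired windows are never evicted in this sketch.)
    window = int(time.time()) // WINDOW_S
    _counters[(user_key, window)] += 1
    return _counters[(user_key, window)] <= LIMIT_PER_USER
```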
REC-API-17: Fine-grained request and user blocking allows the API serving stack to block individual users via their end-user credential and/or network address. This is a key capability in enabling an effective response in the face of an ongoing incident (see the Respond function in the CSF [12]). The actual enforcement can be handled by separate components (e.g., network-level blocking implemented by a firewall or the load balancer; credential-level blocking implemented by the API gateway, bot/abuse detection systems, or the authorization system). For relevant information on these techniques, refer to SP 800-53, AC-3 [15] and SP 800-204B, Sec. 4.6 [3].
REC-API-18: API access must be monitored to ensure that the API serving stack provides sufficient telemetry to assess the availability of APIs and to ensure that policies are being enforced. The traditional triad of logging, metrics, and distributed traces is recommended. All three should be tagged with information about the API being accessed in addition to the runtime service so that service calls can be traced back to APIs (see the sketch below).
For the API gateway itself, a range of signals should be produced to enable the identification of:
• Basic communication information, like the information included in the Common Log Format [20] (e.g., who called, what method, from what origin)
• Health (e.g., rate of requests, rate of errors, latency) per API and API endpoint
• Enforcement results per policy class (e.g., requests allowed or denied due to missing or incorrect authentication or authorization checks, requests blocked due to rate limiting) to assess the aggregate enforcement of each policy
• The health of the services behind the API gateway
General information on audit and logging requirements can be found in SP 800-53 [15], AU-2 Event Logging, AU-3 Content of Audit Records, and AU-12 Audit Record Generation. Information on service mesh telemetry, which can be used for audit and logging, can be found in SP 800-204A [2], SM-DR21 through SM-DR24.
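As a non-normative illustration, an API-tagged access record of the kind REC-API-18 calls for might be emitted as follows; the field set is an assumption that extends Common-Log-Format-style data with the API and endpoint:

```python
import json
import logging
import time

log = logging.getLogger("api-access")

def log_access(api: str, endpoint: str, caller: str, status: int,
               latency_ms: float, policy_result: str) -> None:
    # One structured record per request, tagged with the API and endpoint so
    # that service calls can be traced back to APIs.
    log.info(json.dumps({
        "ts": time.time(), "api": api, "endpoint": endpoint,
        "caller": caller, "status": status, "latency_ms": latency_ms,
        "policy_result": policy_result,  # e.g., "allowed", "denied:authz"
    }))
```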
1 Other patterns have a wider perimeter and are susceptible to the API gateway being bypassed. Therefore, they do not satisfy ID-SEG-REC-4.
of assurance into REC-API-19.2 and REC-API-19.3 shifts the focus to mitigating the risk of leaking sensitive information.
• REC-API-19.2: Field-level visibility as a cross-cutting policy can leverage basic "Public" and "Private" annotations on each field. The authorization check effectively asks whether data should be visible to "external" callers. (REC-API-19.1 focuses on requests, while this control focuses on the data that an application returns to callers in responses; they are complementary controls.) These coarse-grained Public/Private annotations are particularly effective on common types shared across many APIs in the organization (see the sketch after this list). For example, a standard error reporting pattern used by all APIs can leverage field-level annotations to differentiate "user"-facing errors from "developer"-facing errors, mitigating the risk of leaking sensitive information via errors. The gRPC Status proto [21] is an example of a consistent error reporting pattern. In the gRPC case, field-level annotations would reside in the message used for the status's "details."
• REC-API-19.3: Field-level authorization as a cross-cutting policy can be leveraged to perform fine-grained field-level authorization (REC-API-6.1). This extends the idea of REC-API-19.1 down to the level of each individual field of the response and allows for the filtering of API objects per user to implement sophisticated access control schemes. While this kind of approach offers a very high level of data security, it causes a sharp increase in the number of policy checks that the authorization system must perform and requires active participation by application developers to keep per-field permissions up to date as the application evolves. For example, a resource-level authorization check requires one authorization decision per request, while a field-level authorization check requires one authorization decision for the request plus an additional decision for each field of the response. Even an object with a modest number of fields (e.g., 5) results in whole-number multiples more policy decisions made by the authorization system. For developers, the purpose and therefore the permission of an endpoint rarely change, but the fields of the request and response objects for that endpoint regularly evolve over time. This makes upkeep for permissions at the field level more expensive for application developers than endpoint-level annotations (REC-API-19.1).
As a result of the cost and load on the authorization system, this level of fine-grained checking is typically only used in the highest-risk situations and only by sophisticated organizations.
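The sketch referenced in REC-API-19.2 follows: coarse Public/Private field annotations applied as a cross-cutting response filter, assuming Pydantic and a hypothetical "visibility" convention carried in the schema:

```python
from pydantic import BaseModel, Field

class OrderStatus(BaseModel):
    state: str = Field(json_schema_extra={"visibility": "public"})
    user_message: str = Field(json_schema_extra={"visibility": "public"})
    debug_info: str = Field(json_schema_extra={"visibility": "private"})

def redact_for_external(obj: BaseModel) -> dict:
    # Cross-cutting filter: only fields annotated "public" are returned to
    # external callers; everything else is dropped.
    out = {}
    for name, info in type(obj).model_fields.items():
        extra = info.json_schema_extra or {}
        if extra.get("visibility") == "public":
            out[name] = getattr(obj, name)
    return out
```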
SP 800-204B [3] discusses the advantages of using a decentralized API gateway architecture when implementing fine-grained authorization checks. When choosing to implement these authorization policy checks under the centralized and hybrid patterns, care must be taken to ensure that the gateways are not bypassed. For example, a service-level authorization policy could disallow any traffic except from the API gateway as a means of defeating an attempt to bypass gateway checks via pivoting inside the infrastructure.
REC-API-20: Traffic monitoring and policy using semantic field labels can log and monitor the flow of sensitive data in a system. Further, the API gateway can be used as a policy enforcement point to control the flow of that data, potentially blocking traffic flows that transit significant amounts of data. Ultimately, with annotations and enforcement in place, the flow of sensitive data in the organization can be governed by mandatory access control (MAC) policies. A MAC policy is enforced by the authorization system regardless of the user or resource in question. For example, while not explicitly stated as a hard rule in PCI DSS, systems handling PCI data commonly follow a MAC policy requiring that they be isolated from systems that do not implement PCI DSS controls to maintain security and prevent potential breaches. Such a MAC policy can be enforced with a combination of an understanding of which services in the infrastructure are PCI-compliant and data tags on the semantic types of data that flow through the system.
REC-API-21: Non-signature payload scanning (for generative AI APIs) analyzes request and response data for sensitive information that may not match a literal attack signature. Tools typically analyze (e.g., via regular expressions, AI, simple matching, and word filtering) the responses returned by servers to score the risk that they contain sensitive information and take action to block that traffic. Increasingly, AI agents are being deployed to assess the risk of data generated by other agents. At a high level, this technique is like a web application firewall (WAF), but WAFs are fundamentally signature-based, while these analyses are fundamentally content-based.
This is a general category of data egress analysis that is relevant across all APIs, but it has become increasingly important with the growth of generative AI. Generative agents are frequently trained on business-sensitive data or have insight into sensitive business operations and operational data, and they are increasingly exposed to the organization and externally as APIs. Since the inception of generative AI agents, a variety of prompt injection attacks [22] have been created to exfiltrate data via these generative models.
Tools for performing non-signature payload inspection should be used whenever an organization is handling data returned by its system, especially when that data is generated on demand (e.g., by AI agents). In most cases outside of dynamically generated output, implementing simple semantic and syntactic validations (REC-API-13, REC-API-18) will typically provide an organization with more risk mitigation for a lower runtime and operational cost.
• REC-API-21.1: Semantic data discovery tools are typically very good at identifying the type of information flowing through a system (e.g., string, email address). Building the inventory of APIs, and getting developers to adopt well-defined API schemas with meaningful annotations, takes time. Runtime tools such as these are very helpful for initial discovery, for ensuring that rollout is complete across all services, and for ensuring that services stay in compliance after the policy is rolled out. When compute and latency constraints make it reasonable, an organization benefits from inspecting traffic for sensitive data flows, even beyond field-level annotations (see the sketch below).
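As a deliberately minimal, non-normative sketch, a content-based (non-signature) scan might start with regular-expression checks like the following; real tools layer statistical or AI-based scoring on top of this:

```python
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def risk_score(payload: str) -> float:
    # Naive score: the fraction of sensitive-data patterns found in the
    # payload. A gateway might block, alert, or redact above a threshold.
    hits = sum(1 for p in PATTERNS.values() if p.search(payload))
    return hits / len(PATTERNS)
```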
REC-API-22: Fine-grained blocking for specific requests can prevent a DoS or service crash. These bad inputs can often trigger a cascading failure [23], but the queries may not be malicious in nature (e.g., users using the system in ways that it was not intended or designed for). In cybersecurity, this is sometimes called the "query of death" (QoD) [24]. These tools help mitigate the risks of unrestricted resource consumption and malicious input.
As a system grows in size and complexity, it is necessary to be able to pinpoint and block these kinds of queries to keep the system stable and available. Depending on the complexity of the query and environment, it may be possible to leverage a WAF or non-signature payload scanning tools to block some types of QoDs. However, application code changes may be required, sometimes even rearchitecting the application itself, to mitigate the impact of these kinds of queries.
The detailed controls in this section fit into broad classes, and their association with the DevSecOps phases is discussed in Appendix B. This emphasizes the observation that APIs should be treated like any other software and go through an iterative, continuous life cycle.
Fig. 6. API gateway patterns
Three patterns have been developed by industry to implement these capabilities:
1. Centralized gateway — Protections for all APIs in the enterprise are implemented by a single shared component: an API gateway.
2. Hybrid deployment — Cross-cutting policies (e.g., authentication) are implemented in the centralized shared gateway, but application-specific policies (e.g., authorization) are implemented in the application itself or by components owned by the application team.
3. Decentralized gateways — All policy checks are performed by gateways dedicated to each application, often deployed beside each service instance.
All three patterns can achieve all of the controls outlined in this document and can be used by organizations to operate their APIs safely and confidently. Further, several of these patterns may be in use within a single organization. This section explores the engineering design trade-offs that each pattern presents in terms of risks and operational overhead.
Many API gateway products provide management capabilities, such as API key issuance, discovery documentation (i.e., API definition) hosting, documentation for client developers, and support for quotas and billing tiers. These are all valuable features in the enterprise setting, but all of them can be supported under any implementation pattern and are therefore not addressed in this section.
Fig. 7. Centralized API gateway pattern
An API gateway is typically a software application that can be scaled horizontally (i.e., more instances can be deployed side by side). This is one of the reasons why an API gateway often sits behind a load balancer, even for internal service-to-service traffic use cases.
Advantages of this pattern include:
• A single policy enforcement point that is easy to monitor and audit, making it simple to verify that policy is enforced for all traffic that traverses the gateway.
• Implementation matches the organizational structure. Typically, large organizations have a single API team that owns the centralized gateway component. That team is responsible for, and able to execute on, knowing when an API is available, which API endpoints are failing, whether policies are being enforced, whether the configuration is up to date, and other issues.
• Streamlined setup for application developers, who need to "onboard" their API but do not need to deploy or maintain any additional runtime components.
Disadvantages of this pattern include:
• Shared-fate outages. Because there is a single component, an outage of that component causes an outage for all APIs, which can be problematic for mission-critical APIs that need to operate continuously.
• Noisy neighbors, where traffic for some APIs consumes resources and increases latency for all APIs. In the worst case, one application team may submit invalid configuration parameters for a service that crash or cause a DoS on the API gateway, triggering a shared-fate outage for other APIs.
• Long change lead times due to managing how changes to an individual team's API configuration impact the shared gateway. This is a frequent side effect of controls added to mitigate shared-fate outages and noisy neighbors.
• Cost attribution. All requests are handled by the central gateways, and the resources spent per request per API (e.g., on payload validation) are uneven. Therefore, it can be difficult to attribute API gateway runtime costs to internal application teams. This can be a problem for companies that implement an internal resource economy for planning by assigning cost centers to each application team.
• Caching the results of policy decisions at runtime becomes critical when implementing the policies outlined in this SP due to the sheer number of policy checks required. Caching both increases client-perceived availability and reduces the load on key systems, like authentication and authorization. However, two layers of load balancing (i.e., network load balancer to API gateway and API gateway to service instance) tend to result in poor cache hit rates, both across policies enforced by the API gateway and for user data in the application layer itself. While some techniques can be used to mitigate this (e.g., distributed caches or streaming connections), they generally add development or operational overhead for the application team, the API gateway team, or both.
• Because a shared gateway is located at the perimeter, it can be bypassed (e.g., via an attacker pivoting inside the perimeter), which in turn bypasses the policy checks enforced by that gateway. This can be mitigated with techniques like service-to-service access policies that ensure that applications only receive traffic via the centralized gateway or by attaching proofs (i.e., credentials) to the request that allow an application to authenticate that the request was handled by the gateway.
Fig. 8. Distributed gateway pattern (hybrid deployment)
Overall, this pattern behaves similarly to the centralized API gateway pattern, except that some of the most failure-prone parts of the centralized pattern are delegated to the application teams. This streamlines API gateway operations and enables app teams to move at their own pace. However, it also shifts the responsibility for some runtime operational and security concerns from the API gateway team to those application teams.
The exact split of responsibilities between the gateway and the application (e.g., a sidecar in a service mesh architecture) can vary greatly across organizations based on their risk profiles and past experiences. Typically, the gateway takes responsibility for:
• Authentication
• Rate limiting
• Circuit breaking
• Service discovery
• Routing
• Caching
• Network-level load balancing
The application or dedicated gateway is responsible for:
• Authorization
• Request/response validation
• Protocol conversion
• Error handling
• Application-instance load balancing
Both are responsible for logging and monitoring to enable visibility into the state of the system and to ensure that policies are being enforced at runtime.
This pattern shares the advantages of the centralized gateway pattern and adds:
• Mitigation of most shared-fate outages and noisy neighbors by moving the most error-prone processing, like request validation, out of the shared gateway and delegating it to the application or dedicated gateway.
• Increased iteration speed due to the ability to update configurations with less process overhead, which is possible because of the reduced risk of shared-fate outages.
Disadvantages include:
• The enforcement of policies is split across the API gateway and many service instances, which makes it more challenging to ensure that policy is being enforced consistently and correctly.
• There is an increased operational burden on application teams compared to the centralized API gateway pattern, as they are now responsible for ensuring that some policies are enforced in their application.
• Not all classes of shared-fate outages and noisy neighbors can be eliminated because the shared central gateway still performs at least some application-layer processing.
• Cost attribution is significantly improved compared to the centralized pattern because the most expensive runtime policies are implemented by the application teams. However, the centralized gateway can still be very expensive to operate at high scale, and its costs are as difficult to attribute as in the centralized pattern.
• Cache hit rates suffer similarly to the centralized pattern for the same reasons.
• Bypassability/pivot risks, as in the centralized pattern.
management and use its proxies to enforce those policies (i.e., API protections) at each service instance. The service mesh's properties [2] and use for security [3][6] have been covered in other NIST guidance documents.

Fig. 9. Decentralized API gateway pattern

Fig. 10. Service-to-service traffic flows in the decentralized API gateway pattern
Advantages of this pattern include:
• All processing is done per application team (i.e., no noisy neighbors), and the risk of a shared-fate outage is only present at the load balancer, which is a risk shared across all implementation patterns.
• It has the highest rate of change for app teams because they have no external dependencies and little chance of causing outages for other teams.
• Cross-cutting policy can be managed by the central API gateway team via the gateway's control plane (e.g., with the service mesh). This pattern can be adopted harmoniously in a mixed environment, where APIs are implemented via any of the three patterns in a single organization.
• Cost attribution is straightforward and no more or less challenging than attributing any compute resource spent by teams in the organization.
• Cache locality is typically better than in the other patterns because there is only a single layer of load balancing, and the gateway is co-located with the application. This means that gateway policy checks for a given user are cached alongside the application instance caching business logic data for that user. However, if a user's request is load-balanced across multiple service instances, then "duplicate" policy checks have to be performed that would not be required in the other patterns.
Disadvantages include:
• Because policy is checked and cached per application instance, there can be many more policy checks in the system overall. Any time a user's request is load-balanced to a new service instance, a new policy check will very likely have to be performed. This is an inherent property of any zero-trust system that pushes enforcement to the application instance, and it likely necessitates adopting a distributed cache managed alongside or as part of the API-serving infrastructure (a minimal caching sketch appears after this list).
• The pattern puts the most burden on application teams. Those teams have to interact with the team managing the load balancer for each API they expose and need to operate at least some of the API-serving infrastructure (e.g., making sure that they have a gateway deployed and routing configured). Technology like a service mesh can help simplify this, but a burden remains.
• Auditing and verifying policy enforcement can be challenging because enforcement is distributed across all application instances. A robust, distributed gateway implementation (e.g., a service mesh) can help mitigate this via centralized configuration control combined with distributed enforcement and consistent telemetry. If an organization can audit and verify a hybrid gateway pattern, a distributed gateway pattern can be supported with little additional effort.
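To illustrate the caching concern above, the following Go sketch shows a minimal per-instance policy-decision cache with a short time to live (TTL); the evaluate callback is a hypothetical stand-in for the real policy engine, and the key format is an assumption for illustration only. Because each instance holds its own cache, a request load-balanced to a new instance misses and triggers a fresh check; sharing decisions across instances would require the distributed cache discussed above.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // decision records a cached policy outcome and when it expires.
    type decision struct {
        allowed   bool
        expiresAt time.Time
    }

    // policyCache is a minimal per-instance cache of policy decisions.
    type policyCache struct {
        mu      sync.Mutex
        ttl     time.Duration
        entries map[string]decision
    }

    func newPolicyCache(ttl time.Duration) *policyCache {
        return &policyCache{ttl: ttl, entries: make(map[string]decision)}
    }

    // check returns a cached decision if fresh; otherwise, it evaluates the
    // policy (via the supplied callback) and caches the result with a TTL.
    func (c *policyCache) check(key string, evaluate func() bool) bool {
        c.mu.Lock()
        defer c.mu.Unlock()
        if d, ok := c.entries[key]; ok && time.Now().Before(d.expiresAt) {
            return d.allowed
        }
        allowed := evaluate()
        c.entries[key] = decision{allowed: allowed, expiresAt: time.Now().Add(c.ttl)}
        return allowed
    }

    func main() {
        cache := newPolicyCache(30 * time.Second)
        // First call evaluates; a second call within the TTL hits the cache.
        _ = cache.check("user-123:GET:/orders", func() bool { return true })
        allowed := cache.check("user-123:GET:/orders", func() bool { return true })
        fmt.Println("cached decision:", allowed)
    }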
In line with a zero-trust posture, WAF policies should be enforced as close to the application as possible. This helps mitigate a variety of mechanisms that attackers might use to pivot within or otherwise compromise an infrastructure. As a practical matter, it can be cost-prohibitive to run a full suite of WAF mitigations on every internal and external request. This cost can be mitigated in two ways, which can be combined:
1. Incorporate the WAF as part of the overall API-serving infrastructure and deploy the WAF itself in a "hybrid" model (i.e., keep a centralized WAF at the load balancer with a full suite of policies to protect against untrusted traffic). Then enforce a minimum set of app-specific WAF policies near each of the applications (e.g., in the distributed gateway). This minimizes the policies run on east-west (i.e., more trusted) traffic while still sanitizing less trusted external traffic and tends to strike a good balance between risk and cost. A minimal sketch of such an app-local rule set appears after this list.
2. Deploy the WAF as part of the API gateway implementation itself, which can avoid parsing the request multiple times (i.e., reduce the latency and compute costs of WAF policies), regardless of the API-serving implementation pattern chosen. If the API gateway is hybrid or distributed, then this technique can also be incorporated for further performance improvement.
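As an illustration of the first option, the following Go sketch enforces a deliberately small, app-local rule set near the application on the assumption that the full rule suite (e.g., the OWASP CRS [25]) runs centrally at the load balancer. The regular expressions are illustrative placeholders, not a vetted rule set, and a production WAF would inspect far more of the request than the request line.

    package main

    import (
        "net/http"
        "regexp"
    )

    // appRules is a deliberately minimal, app-specific rule set; the full
    // WAF rule suite is assumed to run centrally at the load balancer.
    var appRules = []*regexp.Regexp{
        regexp.MustCompile(`(?i)<script`),        // naive XSS indicator
        regexp.MustCompile(`(?i)union\s+select`), // naive SQL injection indicator
        regexp.MustCompile(`\.\./`),              // naive path traversal indicator
    }

    // miniWAF rejects requests whose path or query matches any local rule.
    // Scanning only the request line keeps per-request cost low on
    // east-west (i.e., more trusted) traffic.
    func miniWAF(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            target := r.URL.RequestURI()
            for _, rule := range appRules {
                if rule.MatchString(target) {
                    http.Error(w, "request blocked by policy", http.StatusForbidden)
                    return
                }
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        http.ListenAndServe(":8080", miniWAF(api))
    }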
Most DDoS mitigation tools are deployed at the network edge as part of the load balancer, or even before the load balancer as part of the CDN and DNS system (often called "Global Traffic Management"). Predictably, DDoS mitigation tools help mitigate unrestricted resource consumption (see Sec. 2.4).
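Although full DDoS mitigation belongs at the edge, the basic mechanism that edge tools and gateways use to cap unrestricted resource consumption is often a per-client token bucket. The following Go sketch shows the shape of such a limiter; the capacity and refill values are hypothetical, and keying on the remote address is an illustrative simplification.

    package main

    import (
        "math"
        "net/http"
        "sync"
        "time"
    )

    // bucket holds a client's remaining tokens and the time of the last refill.
    type bucket struct {
        tokens float64
        last   time.Time
    }

    // limiter tracks one token bucket per client key (e.g., source IP).
    type limiter struct {
        mu       sync.Mutex
        buckets  map[string]*bucket
        capacity float64 // maximum burst size
        rate     float64 // tokens added per second
    }

    // allow spends one token for the client if available.
    func (l *limiter) allow(key string) bool {
        l.mu.Lock()
        defer l.mu.Unlock()
        now := time.Now()
        b, ok := l.buckets[key]
        if !ok {
            b = &bucket{tokens: l.capacity, last: now}
            l.buckets[key] = b
        }
        // Refill proportionally to elapsed time, capped at capacity.
        b.tokens = math.Min(l.capacity, b.tokens+now.Sub(b.last).Seconds()*l.rate)
        b.last = now
        if b.tokens < 1 {
            return false
        }
        b.tokens--
        return true
    }

    func main() {
        l := &limiter{buckets: make(map[string]*bucket), capacity: 10, rate: 5} // hypothetical limits
        http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !l.allow(r.RemoteAddr) {
                http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
                return
            }
            w.Write([]byte("ok"))
        }))
    }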
protection. The key point in each pattern is identifying where to enforce each policy. These decisions result in trade-offs in runtime, architecture, and operations for the application teams utilizing the API-serving infrastructure. Many organizations deploy a mix of all three patterns in production precisely because of those trade-offs. All three patterns can be used to successfully implement all of the controls outlined in this document. That said, the distributed gateway pattern and its companion technologies best align with the principles of zero trust and are strongly recommended for organizations that want to adopt a security-forward approach.
References
[1] U.S. Department of Defense Chief Information Officer (2024) DoD Enterprise DevSecOps Fundamentals. Version 2.5, October 2024. Available at https://dodcio.defense.gov/Portals/0/Documents/Library/DoD%20Enterprise%20DevSecOps%20Fundamentals%20v2.5.pdf
[2] Chandramouli R, Butcher Z (2020) Building Secure Microservices-based Applications Using Service-Mesh Architecture. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-204A. https://doi.org/10.6028/NIST.SP.800-204A
[3] Chandramouli R, Butcher Z, Chetal A (2021) Attribute-based Access Control for Microservices-based Applications using a Service Mesh. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-204B. https://doi.org/10.6028/NIST.SP.800-204B
[4] Chandramouli R (2022) Implementation of DevSecOps for a Microservices-based Application with Service Mesh. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-204C. https://doi.org/10.6028/NIST.SP.800-204C
[5] Chandramouli R, Kautz F, Torres-Arias S (2024) Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-204D. https://doi.org/10.6028/NIST.SP.800-204D
[6] Chandramouli R, Butcher Z (2023) A Zero Trust Architecture Model for Access Control in Cloud-Native Applications in Multi-Cloud Environments. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-207A. https://doi.org/10.6028/NIST.SP.800-207A
[7] OWASP (2023) OWASP Top 10 API Security Risks. Available at https://owasp.org/API-Security/editions/2023/en/0x11-t10/
[8] OWASP (2023) API2:2023 Broken Authentication. Available at https://owasp.org/API-Security/editions/2023/en/0xa2-broken-authentication/
[9] Wikipedia (2025) Multi-factor authentication fatigue attack. Available at https://en.wikipedia.org/wiki/Multi-factor_authentication_fatigue_attack
[10] Wikipedia (2024) Billion laughs attack. Available at https://en.wikipedia.org/wiki/Billion_laughs_attack
[11] Wikipedia (2025) Zip bomb. Available at https://en.wikipedia.org/wiki/Zip_bomb
[12] National Institute of Standards and Technology (2024) The NIST Cybersecurity Framework (CSF) 2.0. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Cybersecurity White Paper (CSWP) NIST CSWP 29. https://doi.org/10.6028/NIST.CSWP.29
[13] Wikipedia (2025) Principle of least astonishment. Available at https://en.wikipedia.org/wiki/Principle_of_least_astonishment
[14] F# for fun and profit (2013) Designing with types: Making illegal states unrepresentable. Available at https://fsharpforfunandprofit.com/posts/designing-with-types-making-illegal-states-unrepresentable/
[15] Joint Task Force (2020) Security and Privacy Controls for Information Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-53r5. Includes updates as of December 10, 2020. https://doi.org/10.6028/NIST.SP.800-53r5
[16] Barker E (2020) Recommendation for Key Management: Part 1 – General. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) NIST SP 800-57pt1r5. https://doi.org/10.6028/NIST.SP.800-57pt1r5
[17] National Institute of Standards and Technology (2019) Security Requirements for Cryptographic Modules. (Department of Commerce, Washington, D.C.), Federal Information Processing Standards Publications (FIPS) NIST FIPS 140-3. https://doi.org/10.6028/NIST.FIPS.140-3
[18] Internet Engineering Task Force (2015) JSON Web Token (JWT). RFC 7519. Available at https://datatracker.ietf.org/doc/html/rfc7519
[19] Internet Engineering Task Force (2008) Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. RFC 5280. Available at https://datatracker.ietf.org/doc/html/rfc5280
[20] Wikipedia (2023) Common Log Format. Available at https://en.wikipedia.org/wiki/Common_Log_Format
[21] gRPC project (2016) status.proto. Available at https://github.com/grpc/grpc/blob/master/src/proto/grpc/status/status.proto
[22] Wikipedia (2025) Prompt injection. Available at https://en.wikipedia.org/wiki/Prompt_injection
[23] Wikipedia (2024) Cascading failure. Available at https://en.wikipedia.org/wiki/Cascading_failure
[24] InfoQ (2020) How to Avoid Cascading Failures in Distributed Systems. Available at https://www.infoq.com/articles/anatomy-cascading-failure/
[25] OWASP CRS Project (2025) OWASP CRS. Available at https://coreruleset.org
[26] Wikipedia (2025) Confused deputy problem. Available at https://en.wikipedia.org/wiki/Confused_deputy_problem
[27] Gartner (2025) Cloud Web Application and API Protection. Available at https://www.gartner.com/reviews/market/cloud-web-application-and-api-protection
A.3. API Classification Based on Architectural Style or Pattern (API Types)
Table 1. API classification based on architectural patterns