The SFCC Quota Gauntlet: A Developer’s Survival Guide to the Top Platform Limits

The Rhino Inquisitor — https://www.rhino-inquisitor.com — Mon, 24 Nov 2025
It is a scenario that haunts every e-commerce developer: the 3 AM pager alert. The production site is down, shoppers are seeing the dreaded “general error page,” and sales have come to a standstill. After a frantic dive into the logs, the culprit is revealed: a cryptic message about an “enforced quota violation”. This is the moment every developer working on Salesforce B2C Commerce Cloud learns that, despite its power, the platform is a governed territory with strict rules.  

These laws are known as quotas. They are not arbitrary punishments designed to frustrate developers. Instead, they are the platform’s sophisticated immune system, a set of essential guardrails designed to ensure stability, performance, and fairness in a complex multi-tenant environment. Think of them as the zoning laws and building codes of a bustling digital metropolis. They prevent one tenant’s poorly designed skyscraper from blocking the sun for everyone else, ensuring the shared infrastructure—memory, CPU, database resources—remains healthy and responsive for all.  

For developers migrating from the Salesforce Core platform, it is tempting to equate these quotas with Apex Governor Limits. While the philosophy of resource protection is similar, the specifics are worlds apart. B2C Commerce has its own unique set of constraints—Object Quotas, API Quotas, and Script Timeouts—that demand a different mindset and a distinct set of architectural patterns. Understanding these is not optional; it is fundamental to survival.  

This guide serves as a definitive map through that governed territory. It moves beyond the official documentation to provide the “in-the-trenches” wisdom needed to build robust, scalable, and, most importantly, quota-proof applications on B2C Commerce. Consider it a survival guide for navigating the platform’s unspoken rules and turning its constraints into a competitive advantage.

The Monolith's Mandates: Top 10 Quotas for Every SFCC Developer

These are the foundational limits that every Salesforce B2C Commerce developer must internalize, whether they are maintaining a legacy SiteGenesis implementation, building on the modern Storefront Reference Architecture (SFRA), or managing the backend services for a headless storefront. These are the immutable laws of the land, and ignorance is no defense when an enforced quota brings a production instance to its knees. The following cheat sheet summarises the most critical quotas to keep in mind during architecture, development, and code review:

  • Custom objects: 400,000 replicable + 400,000 non-replicable per instance
  • Object type definitions: 300 per instance (system + custom)
  • Storefront script execution: 5 minutes per request
  • HTTPClient calls: 16 per storefront request
  • Product line items: 400 per basket
  • Enabled promotions: 10,000 (only 1,000 may be active at once)
  • Rendered ISML page size: 10MB
  • Session size: 10KB total (2,000 characters per string)
  • Custom object creations: 10 per storefront request
  • Storefront file I/O: 0 (writes are prohibited in storefront requests)

The Digital Hoarder's Downfall: Custom Objects (400,000 Limit)

The Limit: An instance can hold a maximum of 400,000 replicable (stageable) custom objects, plus a separate limit of 400,000 non-replicable (non-stageable) custom objects.

The Danger Zone: This limit, while seemingly vast, is often threatened by insidious data accumulation. Common culprits include using custom objects to store transactional data, such as detailed integration logs or granular user interaction events, which rightfully belong in an external system of record. 

Another frequent anti-pattern is the complete lack of data retention policies, allowing temporary data—such as tokens for password resets or abandoned cart information—to accumulate indefinitely until the limit is reached.

The Fallout: Once this enforced quota is hit, the consequences are severe. Any call to dw.object.CustomObjectMgr.create() will fail with an uncatchable exception. This means any feature relying on the creation of new custom objects, from saving a user’s address preference to logging a critical integration failure, will cease to function. It is a systemic failure that can cripple significant portions of a site’s custom functionality.

The Pro Move: The key to avoiding this downfall is architectural discipline and rigorous data hygiene.

  • Be Ruthless with Data: Before storing anything in a custom object, developers and architects must ask the critical question: “Is B2C Commerce the correct system of record for this data?” If the data originates from or is primarily used by another platform (like a CRM or Marketing Cloud), it should reside there.
  • Mandate Cleanup: No custom object intended for temporary storage should be created without a corresponding, regularly scheduled purge job. A clear data retention period must be defined and enforced as part of the development lifecycle for that feature.
  • Extend, Don’t Invent: Before creating a new custom object type, developers should exhaust all possibilities of extending existing system objects with custom attributes. System objects are generally more performant and do not count against this specific quota.
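The “mandate cleanup” rule above can be sketched in plain JavaScript. In a real job step the candidates would come from dw.object.CustomObjectMgr.queryCustomObjects() and be deleted with CustomObjectMgr.remove(); here the objects are plain stand-ins, and the creationDate field is a hypothetical attribute:

```javascript
// Sketch of the retention check a scheduled purge job might apply.
// In SFCC the objects would come from CustomObjectMgr.queryCustomObjects();
// here they are plain objects with a hypothetical `creationDate` field.
function findExpired(objects, retentionDays, now) {
  var cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return objects.filter(function (o) {
    return o.creationDate.getTime() < cutoff;
  });
}

// Example: purge password-reset tokens older than 7 days.
var now = new Date('2025-11-24T00:00:00Z');
var tokens = [
  { id: 'a', creationDate: new Date('2025-11-10T00:00:00Z') }, // 14 days old
  { id: 'b', creationDate: new Date('2025-11-22T00:00:00Z') }  // 2 days old
];
var expired = findExpired(tokens, 7, now);
// expired contains only token 'a'
```

The retention period itself should be a job parameter, so business owners can tune it without a code deployment.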

 

Beyond the hard limit, there is a more subtle performance drag to consider. The documentation and best practices repeatedly warn that custom objects are not optimised for high-performance database access. This means that even when an instance is well below the 400,000 object ceiling, heavy reliance on custom objects for frequent read/write operations (e.g., a custom inventory system or a real-time logging mechanism) creates significant database churn. This churn leads to a gradual degradation of site performance—slower page loads, longer job execution times—that does not trigger a hard quota violation but insidiously damages the user experience and erodes site stability. The limit is a hard stop, but the performance penalty begins long before the wall is hit.

The Blueprint Boundary: Object Type Definitions (300 Limit)

The Limit: An instance is capped at a maximum of 300 total business object definitions. This count includes all the platform’s built-in system object types, as well as any custom object types created by developers.

The Danger Zone: This limit is rarely a concern for typical, single-brand e-commerce sites. However, it can become a very real constraint for large, complex, multi-brand organisations operating on a single B2C Commerce instance. In such environments, numerous bespoke features, each potentially demanding its own custom data model, can quickly consume the available slots for new object types.

The Fallout: The consequence is absolute: the inability to create new custom object types. This effectively halts the development of any new feature that requires a distinct data structure, putting the brakes on innovation and business agility.

The Pro Move: This strategy is rooted in effective data modelling and long-term architectural planning. Developers should practice data consolidation, designing a single, well-structured custom object with multiple attributes to represent a feature’s data, rather than creating a constellation of small, single-purpose object types. It is also good practice to regularly audit development and staging environments to remove unused custom object types that may have been created for proofs-of-concept or abandoned features, freeing up slots in the blueprint.

This limit imposes a hidden constraint on architectural freedom. While 300 types may seem generous, the platform’s system objects already occupy a significant portion of that budget. This creates a strategic tension. A developer might be tempted to create a new custom object type to keep a feature’s data cleanly isolated. However, in doing so, they consume a precious, non-renewable, instance-wide resource. This pressure forces architects to think about the long-term data model of the entire instance. A short-sighted decision to create a new object type for a minor feature today could prevent the development of a more critical, large-scale feature tomorrow. It encourages a pattern of extending existing objects, which, if not managed carefully, can lead to bloated objects with hundreds of attributes, creating their own set of maintenance challenges.

The 5-Minute Fuse: Storefront Script Execution Timeout

The Limit: Any script running within a storefront request context, such as a controller or a script module it calls, has a maximum execution time of 5 minutes (300,000 milliseconds).

The Danger Zone: This limit is typically breached by one of three culprits: highly complex, unoptimized calculations performed in real-time; synchronous calls to slow or unresponsive third-party services; or inefficient loops that iterate over massive datasets without proper optimisation.

The Fallout: The platform shows no mercy. A non-catchable ScriptingTimeoutError is thrown, the script is immediately aborted, and the user is presented with an error page. There is no opportunity for graceful recovery; the transaction is dead.

The Pro Move: Adhering to this limit requires a shift in thinking from synchronous to asynchronous processing.

  • Offload to Jobs: Any process that is not absolutely essential for the immediate, initial rendering of the page should be moved to an asynchronous job. This is the canonical pattern for tasks like order export, complex report generation, or large data synchronisations.

  • Write Efficient Code: This is a table-stakes requirement for any performance-conscious developer. Optimise loops by using efficient APIs like seekable iterators instead of loading entire large collections into memory. Cache the results of expensive or repeated operations within a single request.

  • Use the Service Framework: For all external API calls, the Service Framework is mandatory. It allows for the configuration of aggressive timeouts and circuit breakers, enabling the system to “fail fast” rather than waiting for a slow third-party service to consume the entire 5-minute budget.

A critical nuance is the interplay of different timeout contexts.

The documentation reveals a hierarchy of timeouts: 5 minutes for controllers, but a much stricter 30 seconds for contexts like OCAPI hooks or Page Designer scripts. The governing rule is that the timeout that ends earliest wins. This creates a crucial dependency chain. A controller might call a script module, which is then used within a hook. Even though the controller has a generous 5-minute budget, the hook’s 30-second budget becomes the real, effective constraint for that piece of the execution. Developers cannot think about the 5-minute limit in isolation; they must consider the entire potential call stack of a script and design for the most restrictive timeout it might encounter.
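The “earliest timeout wins” rule can be expressed directly. Computing the budget up front like this is illustrative, but the millisecond values match the limits described above:

```javascript
// The effective budget for a script is the smallest timeout in its call stack.
// Values in milliseconds; the context names are illustrative.
var TIMEOUTS = {
  controller: 300000,   // 5-minute storefront request
  ocapiHook: 30000,     // 30-second hook budget
  pageDesigner: 30000   // Page Designer scripts share the stricter budget
};

function effectiveTimeout(contexts) {
  return contexts.reduce(function (min, ctx) {
    return Math.min(min, TIMEOUTS[ctx]);
  }, Infinity);
}

// A script module shared by a controller and an OCAPI hook must be
// designed for the strictest budget it can encounter:
effectiveTimeout(['controller']);              // 300000
effectiveTimeout(['controller', 'ocapiHook']); // 30000
```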

The Party Line: API HTTPClient Calls Per Page (16 Limit)

Remember old party lines? B2C Commerce's HTTPClient is similar. You only get 16 external API calls per request. If too many services try to "talk" at once, your page will throw an error. Plan your integrations wisely!

The Limit: A single storefront page request is permitted to make a maximum of 16 outbound calls using dw.net.HTTPClient.send(). The platform will log a warning starting at 10 calls.

The Danger Zone: This quota is a direct threat to “chatty” integrations. Pages that need to assemble data from numerous disparate microservices or third-party APIs are at high risk. A common scenario is a product detail page that attempts to fetch real-time shipping quotes, tax calculations, inventory levels from multiple warehouses, and personalised content, all through separate API calls.

The Fallout: An enforced quota violation will abruptly terminate the request, sending the user to an error page. The page they were trying to load will simply fail.

The Pro Move: Avoiding this limit requires a deliberate integration strategy that favours consolidation and caching over chattiness.

  • API Aggregation: When the external services are within the team’s control, the best practice is to create an aggregation layer or a Backend-for-Frontend (BFF) service. This service can receive a single request from B2C Commerce, make multiple downstream calls itself, and return a consolidated data payload in one response.

  • Aggressive Caching: Responses from external systems that do not change on every request should be aggressively cached using B2C Commerce’s custom cache framework. This avoids making a network call on every single page load for the same data.

  • Switch to Data Feeds: For data that does not require real-time updates (e.g., product specifications, warehouse inventory), the entire model should be transitioned from real-time API calls to scheduled data feed imports. Importing inventory levels every 15 minutes via a job is far more scalable and performant than hitting an inventory API on every product page view.
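The caching bullet above follows a get-or-load shape similar to SFCC’s custom cache API (cache.get(key, loader)). A minimal plain-JavaScript sketch, with a stand-in for the network call:

```javascript
// Minimal get-or-load cache, mirroring the shape of SFCC's custom cache
// API (cache.get(key, loader)): the loader runs only on a miss, so
// repeated page views reuse the stored response instead of the network.
function createCache() {
  var store = {};
  return {
    get: function (key, loader) {
      if (!(key in store)) {
        store[key] = loader();
      }
      return store[key];
    }
  };
}

var calls = 0;
var cache = createCache();
function fetchShippingQuote() { // stand-in for an HTTPClient.send() call
  calls += 1;
  return { price: 4.99 };
}
cache.get('quote:BE', fetchShippingQuote);
cache.get('quote:BE', fetchShippingQuote);
// calls === 1: the second lookup never hits the network
```

Unlike this sketch, the platform’s caches also carry expiration, so a real implementation would pass a TTL appropriate to how stale the data may be.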

 

This limit of 16 calls is more than just a technical constraint; it is a powerful driving force for better architecture. It actively discourages a naive, chatty integration pattern where the storefront directly communicates with a fleet of microservices. This quota, especially when combined with the 5-minute script timeout, compels architects to adopt more robust and performant patterns, such as the API Gateway. The limit is not just about preventing resource exhaustion on the B2C Commerce side; it is about enforcing a more resilient and scalable integration architecture for the entire solution.

The Overstuffed Cart: Product Line Items Per Basket (400 Limit)

The Limit: The total number of product line items that can be contained within a single basket is 400. This includes both independent products and any dependent items, such as those in a product bundle. A warning is logged when a basket reaches 200 items.

The Danger Zone: While most B2C shoppers will never approach this limit, it is a significant and common hurdle in B2B commerce scenarios. A business customer placing a replenishment order might need to purchase hundreds of unique SKUs in a single transaction. It can also appear in B2C contexts, such as with features like “quick order” forms or when a user attempts to add a very large wishlist to their cart at once.

The Fallout: The consequence is a direct blocker to conversion. The platform will prevent the user from adding any more items to their cart once the limit is reached. The API call to add the 401st item will fail, and without graceful error handling, the user will be left confused and unable to complete their purchase.

The Pro Move: Handling this limit requires different strategies for B2B and B2C.

  • B2B Solutions: For B2B use cases, a custom solution is often required. One approach is to build logic that automatically splits a large order into multiple, smaller baskets behind the scenes, presenting it to the user as a single order confirmation. Another is to guide the user through the UI to create several smaller orders.

  • B2C Graceful Handling: For B2C, the focus is on clear user feedback. When the cart approaches the limit, the UI should display a non-intrusive message. When the limit is hit, a clear, helpful error message should explain the situation: “Your shopping cart is full. To add more items, please proceed to checkout with your current selection or save items to a wishlist for later.”
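The B2B basket-splitting idea above reduces to chunking the incoming line items. A sketch, assuming the bulk order arrives as a plain array of items:

```javascript
var MAX_LINE_ITEMS = 400; // platform limit per basket

// Split a large B2B order into platform-sized baskets. `items` is a
// plain array standing in for the line items of an incoming bulk order.
function splitIntoBaskets(items, maxPerBasket) {
  var baskets = [];
  for (var i = 0; i < items.length; i += maxPerBasket) {
    baskets.push(items.slice(i, i + maxPerBasket));
  }
  return baskets;
}

var baskets = splitIntoBaskets(new Array(950).fill('sku'), MAX_LINE_ITEMS);
// 950 items → 3 baskets of 400, 400 and 150 line items
```

The hard part in practice is not the chunking but the orchestration: shipping, promotions, and payment must be reconciled across the resulting baskets before they are presented as one order confirmation.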

The Promotion Paradox: Enabled Promotions (10,000 Limit)

The Limit: An instance can have a maximum of 10,000 promotions in an “enabled” state. It is critical to note that there is an even stricter, more performance-relevant limit of 1,000 active promotions (enabled, assigned to a campaign, and currently within their run dates) at any given time.

The Danger Zone: This limit is typically threatened by large retailers with complex, overlapping, and long-running promotional strategies, compounded by a failure to perform basic data hygiene. Merchandising teams can create new promotions for every minor event without needing to revisit and clean up old or expired ones.

The Fallout: The primary risk here is not a hard error but a slow, creeping death of performance. The B2C Commerce promotion engine must evaluate all potentially applicable promotions during basket calculation. As the number of enabled—and especially active—promotions grows, this calculation becomes exponentially more complex. The result is a slowdown in the basket and checkout pipelines, which impacts the checkout experience for all users, even those not using a promotion. Hitting the hard limit is rare, but the performance degradation begins far sooner.

The Pro Move: The solution is not technical but procedural: a strict “Promotion Lifecycle Management” process must be implemented. Business users and merchandisers must be trained and required to archive promotions as soon as their effective date range has passed. From a technical monitoring perspective, the “Number of Active Promotions” quota (1,000) should be treated as the more critical performance indicator. Any sustained approach toward this number should trigger an immediate review and cleanup of the active promotion landscape.
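The distinction between “enabled” and “active” can be monitored with a simple predicate. A sketch with hypothetical field names (campaignAssigned, startDate, endDate) standing in for the platform’s promotion attributes:

```javascript
// A promotion counts as "active" only when it is enabled, assigned to a
// campaign, and inside its date window — the 1,000-active ceiling applies
// to this subset, not to everything merely enabled.
function isActive(promo, now) {
  return Boolean(promo.enabled && promo.campaignAssigned &&
    now >= promo.startDate && now <= promo.endDate);
}

function countActive(promos, now) {
  return promos.filter(function (p) { return isActive(p, now); }).length;
}

var now = new Date('2025-11-24');
var promos = [
  { enabled: true, campaignAssigned: true,
    startDate: new Date('2025-11-01'), endDate: new Date('2025-12-01') },
  { enabled: true, campaignAssigned: false,          // enabled, not active
    startDate: new Date('2025-11-01'), endDate: new Date('2025-12-01') },
  { enabled: true, campaignAssigned: true,           // expired — archive it
    startDate: new Date('2025-01-01'), endDate: new Date('2025-02-01') }
];
// countActive(promos, now) === 1
```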

The Unfurled Scroll: ISML Template Size (10MB Limit)

Like an ancient, endless scroll, an ISML template can grow beyond its bounds. Keep an eye on that 10MB file size limit, or your beautifully crafted template might just refuse to render, leaving you with nothing but a blank page.

The Limit: The final, rendered HTML output generated by an ISML template cannot exceed 10MB in size.

The Danger Zone: This limit is most commonly encountered on pages that render huge loops of data, such as a “view all” category page that attempts to display thousands of product tiles without any form of pagination. Another culprit can be content slots that have been bloated with excessive, unoptimized, or copy-pasted HTML from a WYSIWYG editor.

The Fallout: The platform throws a Page size limit of 10 MB exceeded exception, and the entire page processing request is cancelled. The user sees an error page.

The Pro Move: This limit enforces fundamental web performance best practices.

  • Pagination is Not Optional: All product listing pages, search result pages, and any other page that displays a potentially large list of items must implement robust, user-friendly pagination. Infinite scroll is an acceptable alternative, but only when implemented intelligently, with asynchronous calls fetching subsequent pages of data.

  • Lazy Loading: For content that is “below the fold” (not immediately visible to the user), use lazy loading techniques to defer the loading of that content until the user scrolls down.

  • Keep Logic Out of ISML: ISML templates should be used solely for presentation logic. All complex data preparation, filtering, and business logic should be handled in controller or script module files before being passed to the template. This keeps templates clean, small, and focused on rendering.
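Pagination amounts to slicing the full list in the controller so only one bounded page ever reaches the template. A minimal sketch:

```javascript
// Server-side pagination keeps the rendered page bounded: only one slice
// of the full product list ever reaches the ISML template.
function paginate(items, pageSize, pageNumber) { // pageNumber is 1-based
  var start = (pageNumber - 1) * pageSize;
  return {
    items: items.slice(start, start + pageSize),
    totalPages: Math.ceil(items.length / pageSize)
  };
}

var page = paginate(new Array(5000).fill('tile'), 24, 2);
// page.items.length === 24, page.totalPages === 209
```

On the platform itself, dw.util.PagingModel serves the same purpose for API collections; the principle is identical — the template only ever sees one page.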

The Tiny Backpack: Session Size (10KB Limit)

The Limit: The total serialized size of all data stored in the dw.system.Session object is limited to 10KB. Furthermore, individual strings stored within the session are capped at 2,000 characters.

The Danger Zone: The most common anti-pattern that leads to violating this quota is storing large, complex objects or entire collections of data in the session. A classic mistake is to perform a product search, store the entire result set of Product objects in the session, and then attempt to read from it on the next page for rendering.

The Fallout: Exceeding the limit can lead to unpredictable behaviour, including data truncation or runtime exceptions.

The Pro Move: The session should be treated as a tiny, temporary backpack, not a storage warehouse.

  • Store Identifiers, Not Objects: The correct pattern is to store only small, primitive identifiers in the session, such as productID, customerNo, or orderID. The full objects should be re-fetched from the database or, preferably, from a cache when they are needed on a subsequent page.

     

  • Use session.privacy: For data that is specific to a user’s logged-in session and should be cleared upon logout (like temporary preferences), use the session.privacy custom attributes. The platform automatically handles the cleanup of this data.
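The “identifiers, not objects” rule can be enforced with a cheap development-time guard. This is a plain-JavaScript sketch — JSON.stringify length is only a rough byte estimate, and the platform’s actual session serialization differs:

```javascript
// Guard sketch: store only small identifiers in the session and fail
// loudly in development if the payload approaches the 10KB ceiling.
var SESSION_LIMIT_BYTES = 10 * 1024;

function assertSessionSafe(payload) {
  var size = JSON.stringify(payload).length; // rough byte estimate
  if (size > SESSION_LIMIT_BYTES) {
    throw new Error('Session payload too large: ' + size + ' bytes');
  }
  return payload;
}

// Good: identifiers only — the full objects are re-fetched when needed.
assertSessionSafe({ productID: 'P-123', customerNo: '00041' });
```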

     

The Ten Commandments of Creation: API Custom Object Creation Per Page (10 Limit)

The Limit: A maximum of 10 custom objects can be created within a single storefront page request through the dw.object.CustomObjectMgr.create() API call.

The Danger Zone: This quota is often hit by custom forms that create multiple, separate custom object records upon submission. For example, a complex user survey with ten questions where each answer is saved as an individual custom object record would hit the limit. Another common cause is a custom logging implementation that attempts to create a new custom object for every minor event that occurs during a single page view.

The Fallout: This is an enforced quota. The 11th call to create() will fail, the request will be stopped, and the user will see an error. If they were submitting a form, all the data they entered would be lost, leading to a highly frustrating experience.

The Pro Move: The key is to consolidate data. Instead of creating many small objects, developers should batch the data into a single, larger, and more structured custom object. For the survey example, all ten answers should be collected and stored as a single JSON string within a single attribute of a custom object. For logging, a third-party logging service should be used, or logs should be buffered and sent asynchronously to avoid impacting the user-facing request.
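For the survey example, consolidation means one create() call carrying all answers as JSON. A sketch of the record shape, where the answersJSON attribute name is hypothetical:

```javascript
// Instead of ten CustomObjectMgr.create() calls (one per answer), collect
// the whole submission into a single JSON attribute of one custom object.
// The `answersJSON` attribute name is a hypothetical example.
function buildSurveyRecord(surveyId, answers) {
  return {
    custom: {
      surveyId: surveyId,
      answersJSON: JSON.stringify(answers) // one object, one create() call
    }
  };
}

var record = buildSurveyRecord('NPS-2025', [
  { question: 1, answer: 'Very satisfied' },
  { question: 2, answer: 9 }
]);
// JSON.parse(record.custom.answersJSON).length === 2
```

Keep an eye on attribute size limits when batching this way; very large submissions may still warrant an external system of record.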

The Read-Only Rule: File I/O in Storefront (0 Limit)

The Limit: Nearly all file writing and manipulation methods in the dw.io.File and dw.io.FileWriter classes have a strict storefront limit of zero. This includes File.createNewFile(), FileWriter(), File.zip(), and others.

The Danger Zone: This is a trap for developers new to the platform who might attempt to dynamically generate a file—such as a custom PDF invoice or a CSV export of user data—and serve it for download directly from a storefront controller request.

The Fallout: This is a non-negotiable architectural constraint. The code will work perfectly in a sandbox job context but will fail with a hard, enforced quota violation in a production storefront request. It is a guaranteed failure.

The Pro Move: Any form of dynamic file generation must happen in an asynchronous job context. The canonical pattern is as follows:

  1. A user on a storefront page clicks a button to request a file (e.g., “Download My Order History”).
  2. The storefront controller does not generate the file. Instead, it creates a “token” custom object with a status of “pending” and triggers a job, passing the ID of this token object as a parameter.
  3. The user’s page receives a confirmation and begins to poll a separate, lightweight controller every few seconds, checking the status of the token object.
  4. Meanwhile, the job executes in the background. It performs the heavy lifting of querying the data and generating the file, saving it to a temporary location in the WebDAV impex or temp directory. Once complete, it updates the token custom object’s status to “complete” and records the path to the generated file.
  5. The polling mechanism on the user’s page sees the “complete” status, retrieves the file path, and presents the user with a direct download link to the file in WebDAV.
  6. Clean up your token custom objects! Remember 😇
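On the client, the polling step reduces to mapping the token’s status to a next UI action. A sketch, with illustrative status names and a hypothetical file path:

```javascript
// Client-side reducer for the polling loop: map the token object's
// status to the next UI action. Status names are illustrative.
function nextAction(token) {
  switch (token.status) {
    case 'pending':
      return { type: 'wait' };                       // keep polling
    case 'complete':
      return { type: 'download', url: token.filePath };
    default:
      return { type: 'error' };                      // failed / unknown
  }
}

nextAction({ status: 'pending' });                   // → wait
nextAction({ status: 'complete', filePath: '/impex/report.csv' }); // → download
```

A real implementation should also cap the number of polls and surface a retry option, so a stuck job never leaves the user spinning forever.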

The Headless Frontier: A PWA Kit & SCAPI Hit List

Transitioning to a headless architecture with the PWA Kit and the Salesforce Commerce API (SCAPI) represents a fundamental paradigm shift. The performance battleground moves away from the server-side rendering of ISML templates and into the realm of API response times, network latency, and the efficiency of the client-side JavaScript application. In this world, the browser is no longer a thin client rendering HTML; it is a rich application responsible for its own state management, routing, and data fetching. The quotas and limits that matter most are less about single, monolithic server requests and more about the rate, size, and efficiency of the constant communication between the client and a constellation of API endpoints.

Welcome to the headless frontier. Here, developers are like astronauts, decoupling the front-end "head" to explore new user experiences, all powered by a universe of back-end APIs and services.

The Bouncer at the Door: SCAPI & OCI Rate Limits (HTTP 429)

The Limit: Unlike the hard-and-fast quotas of the monolith, SCAPI operates on a system of rate limiting and load shedding. There is not one single number to watch. Instead, when the platform determines that a client is making too many requests in a given timeframe, it will respond with an HTTP 429 Too Many Requests status code. Specific, high-volume API families, like Omnichannel Inventory (OCI), have very granular, published rate limits. For instance, the get-availability endpoint can handle a massive 10,000 requests every 10 seconds, while the imports endpoint is limited to just 2 requests per 10 seconds.

The Danger Zone: The primary risk factor is a high-traffic site with poorly implemented or non-existent caching for API responses. A classic example is a product list page (PLP) in a PWA Kit application where, for every user, the browser makes individual, uncached calls to the Shopper Products and OCI Availability APIs for every single product tile visible on the page. During a sales event, this can quickly overwhelm the rate limits.

The Fallout: The API client—the PWA Kit application running in the user’s browser—gets throttled. If the client-side code is not built to handle the 429 response gracefully, the UI will simply fail to load the required data. Users will see endless loading spinners, empty components, or jarring error messages, resulting in a broken and untrustworthy user experience.

The Pro Move: Resilience is the name of the game.

  • Honor Retry-After: The 429 response is often accompanied by a Retry-After header, which specifies the number of seconds the client should wait before attempting to reconnect. Client-side code must be built to respect this header. The best practice is to implement an exponential backoff strategy, where the delay between retries increases with each subsequent failure, thereby preventing a “thundering herd” of retries from exacerbating the problem.

  • Embrace Client-Side Caching: Modern libraries like React Query, which is a standard part of the PWA Kit, are essential. They provide sophisticated client-side caching, automatically preventing the application from making redundant API calls for data that it has recently fetched and that has not yet been invalidated.

  • Leverage CDN Caching for APIs: For API endpoints that return public, non-personalised data (e.g., product details for a guest user), the PWA Kit’s Managed Runtime proxy can be configured to cache the API JSON response at the CDN edge. This is achieved by changing the request path prefix from /proxy/ to /caching/. 
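The exponential-backoff-with-Retry-After advice can be captured in one function. The base delay and cap below are illustrative tuning values, not platform numbers:

```javascript
// Compute the wait before retrying a throttled SCAPI call: honour the
// Retry-After header when present, otherwise fall back to capped
// exponential backoff. All delays are in milliseconds.
function backoffDelay(attempt, retryAfterSeconds) {
  if (typeof retryAfterSeconds === 'number') {
    return retryAfterSeconds * 1000; // the server's word is final
  }
  var base = 500; // first retry after 0.5s — tuning value, not a platform number
  return Math.min(base * Math.pow(2, attempt), 30000); // cap at 30s
}

backoffDelay(0, 5);          // 5000 — header wins
backoffDelay(0, undefined);  // 500
backoffDelay(3, undefined);  // 4000
backoffDelay(10, undefined); // 30000 — capped
```

Adding a small random jitter to each delay further reduces the risk of synchronized retries from many clients at once.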

This shift to rate-limiting signals a profound change in responsibility. In a traditional SFRA architecture, the server owns the execution and is responsible for handling errors that may occur. If a quota is hit, the server throws a fatal exception. In the headless SCAPI world, the platform simply puts its hand up and says “no more for now” with a 429 code. The responsibility for handling this rejection and maintaining a coherent, resilient user experience shifts almost entirely to the client-side application. Headless development is not just about a different frontend technology; it requires a more sophisticated level of client-side engineering, with a deep understanding of state management, asynchronous error handling, and fault-tolerance patterns.

The Gatekeeper's Toll: SLAS Rate Limits

The Limit: The Shopper Login and API Access Service (SLAS), which governs all authentication and authorisation for Shopper APIs, has its own distinct, high-level rate limits: 24,000 requests per minute (RPM) for production tenants and 500 RPM for non-production tenants.

The Danger Zone: The most common way to violate this limit is with a poorly configured client application that requests a new guest user token on every single API call, rather than caching and reusing the token it has already received.

The Fallout: Authentication fails. Guest shoppers are unable to obtain the necessary JWT to perform actions such as adding items to a cart, and registered users are unable to log in or refresh their sessions. Essentially, all authenticated or basket-related e-commerce functionality grinds to a halt.

The Pro Move: Token management is paramount.

  • Token Caching is Mandatory: A standard SLAS JWT is valid for 30 minutes. The client application must be designed to cache this token (e.g., in browser local storage) and include it in the Authorization header of all subsequent API calls. A new token should only be requested when the current one does not exist or is nearing its expiration time. 

  • Master the Refresh Token Flow: For registered users, the client should use the provided refresh token to obtain a new access token in the background seamlessly, without requiring the user to re-authenticate. Developers should be aware of the latest security enhancements, such as mandatory refresh token rotation, which prohibits the reuse of a refresh token after it has been used once.
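Token caching comes down to a freshness check before each request. The 30-minute TTL matches the SLAS default mentioned above; the one-minute skew buffer is an assumed tuning choice:

```javascript
// Decide whether a cached SLAS access token can be reused. Tokens live
// for 30 minutes; refresh slightly early (skew buffer) so a request never
// goes out with a token that expires in flight.
var TOKEN_TTL_MS = 30 * 60 * 1000;
var SKEW_MS = 60 * 1000; // refresh one minute early — an assumed buffer

function needsRefresh(token, nowMs) {
  if (!token) { return true; } // nothing cached yet
  return nowMs >= token.issuedAtMs + TOKEN_TTL_MS - SKEW_MS;
}

var token = { issuedAtMs: 0, jwt: '...' };
needsRefresh(token, 10 * 60 * 1000);   // false — still fresh, reuse it
needsRefresh(token, 29.5 * 60 * 1000); // true — inside the skew window
```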

The 30-Second Lifeline: PWA Kit Managed Runtime Proxy Timeout

The Limit: Any request that is proxied through the PWA Kit’s Managed Runtime to an external, third-party API is subject to a hard, non-configurable timeout of 30 seconds.

The Danger Zone: This becomes a problem when using the proxy to make a synchronous call to a system known for slow response times. This could be a legacy ERP system for a complex price lookup or a third-party service that provides real-time, computationally intensive freight shipping calculations.

The Fallout: If the origin server does not respond within 30 seconds, the Managed Runtime will terminate the connection and return an HTTP 504 Gateway Timeout error to the client application. This happens regardless of whether the origin server is still processing the request. From the user’s perspective, the operation has failed.

The Pro Move: This is a firm architectural boundary. If a service cannot guarantee a response in under 30 seconds, it cannot be called synchronously during a user-facing interaction that flows through the proxy. The solution is to adopt the same asynchronous, polling-based pattern used for file generation in the monolith: trigger a background process via a quick API call, and have the client poll a separate endpoint for the result.

The Hook's Handcuffs: OCAPI Hook Script Timeout

The Limit: While SCAPI is the future, many implementations still use OCAPI, especially during a phased migration. Scripts executed within OCAPI hooks (e.g., dw.ocapi.shop.basket.beforePOST, afterPOST) are constrained by a very tight 30-second execution timeout. 

The Danger Zone: It is a common temptation for teams migrating from SFRA to lift heavy business logic from their old controllers and drop it directly into an OCAPI hook to modify API behaviour. This could include complex custom price adjustments, intricate promotion applications, or calls to multiple external systems.

The Fallout: If the script in the hook exceeds the 30-second limit, the entire OCAPI API call fails, typically returning a generic 500 error to the client. The hook’s logic is aborted mid-execution, which carries the additional risk of leaving data in an inconsistent state.

The Pro Move: Hooks must be treated with extreme prejudice. They should be kept as lightweight as possible, used only for minor data modifications, simple validations, or setting a value. Any process that involves significant computation, database lookups, or external network calls should be moved out of the hook and into a separate, dedicated custom API endpoint that can be called asynchronously by the client or handled by a backend job.
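To make "lightweight" concrete, here is a sketch of a thin basket hook. This is B2C Commerce server-side script, so it only runs on the platform, and the validation shown is a placeholder for whatever minor check you actually need:

```javascript
// Hook script for dw.ocapi.shop.basket.beforePOST — kept deliberately thin.
var Status = require('dw/system/Status');

exports.beforePOST = function (basket) {
    // Cheap, in-memory validation only: no external calls, no heavy loops.
    if (!basket) {
        return new Status(Status.ERROR, 'INVALID_INPUT');
    }
    return new Status(Status.OK);
};
```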

The Two-Tier System: Custom API Constraints

The Limit: The Custom APIs framework, which allows developers to extend SCAPI with their own endpoints, is built on a fundamental two-tier system: “Shopper” endpoints and “Admin” endpoints. Shopper endpoints are designed for high-scale, low-latency, user-facing interactions and are therefore subject to stricter, unpublished limits on execution runtime and request/response body size. They must be associated with a siteId and are secured using SLAS tokens. Admin endpoints, by contrast, are more permissive but require authentication via Account Manager OAuth and are intended for backend or administrative tasks.   

The Danger Zone: The primary mistake is attempting to perform a data-intensive or long-running operation through a Shopper-scoped Custom API. This might include a user-triggered request to generate a large, custom data export or a complex calculation over a customer’s entire order history.

The Fallout: The request will likely encounter an uncatchable platform timeout or resource limit, resulting in the operation failing for the user.

The Pro Move: The architecture of any custom API must be designed with this split in mind from day one.

  • Shopper Endpoints: Utilise for high-frequency, low-latency operations integral to the core user journey (e.g., retrieving a custom piece of data for the product page).

  • Admin Endpoints: Use for heavy, backend processes or administrative functions that might be called by a custom Business Manager module, an external system, or an asynchronous job (e.g., triggering a bulk data synchronisation).

The Ghost in the Machine: dw.system.Session in a Headless World

The Limit: This is not a formal quota but a critical architectural anti-pattern. The dw.system.Session object is a construct of the B2C Commerce server-side web tier. In a pure, stateless SCAPI architecture, there is no server session. However, in hybrid architectures or during a phased migration from SFRA, developers may be tempted to use OCAPI session bridging or pass the dwsid session cookie in custom headers to SCAPI calls to maintain a semblance of the old session-based state. 

The Danger Zone: Relying on the server-side session to manage state in an application that is supposed to be headless. This creates a tight, brittle coupling to the old monolithic architecture, negating many of the benefits of going headless, such as the independent scaling and deployment of the frontend. It is a recipe for bizarre and hard-to-debug caching and state management bugs.

The Fallout: Unpredictable behaviour is the main outcome. A user’s state may seem to vanish and then reappear, as subsequent API requests are load-balanced to different server nodes in the B2C Commerce grid, some of which may not yet have the user’s session data replicated. Caching at the CDN level becomes nearly impossible, as every request is uniquely personalised by the presence of the dwsid cookie.

The Pro Move: The server session must be avoided at all costs in a headless build. State management is the responsibility of the client application, utilising tools such as React’s built-in state and context APIs or more advanced libraries like Redux. State should be persisted by making explicit API calls, for example, saving the user’s cart contents via the Shopper Baskets API. For personalisation, the modern, stateless approach is to use the Shopper Context API, which allows context (like customer group or source code) to be passed directly in the API call itself, influencing the response without relying on a sticky server session.

Conclusion: From Quota-Fearing to Quota-Fluent

The extensive landscape of quotas and limits within Salesforce B2C Commerce Cloud should not be viewed as a field of landmines for developers. Instead, these constraints are a design partner, a set of rules that, when understood and respected, guide the development of applications that are inherently more performant, stable, and scalable.

The most successful B2C Commerce developers are not those who discover clever workarounds to circumvent limits—a path that inevitably leads to performance bottlenecks, maintenance nightmares, and platform instability. The true experts are those who architect their solutions to thrive within these boundaries. A quota-fluent developer internalises these limits and uses them as a lens through which they evaluate every technical decision. They write efficient code, choose the right tool for the job (a real-time API versus an asynchronous job), cache intelligently at every layer, and have a deep understanding of the unique constraints of their chosen architecture, whether it’s a traditional monolith or a modern headless application.

The path to this fluency begins with proactive monitoring. The Quota Status dashboard in Business Manager should be a regular destination, not just a reactive tool used during an outage. Every WARN level message in the quota logs should be treated as a valuable, early signal—an opportunity to refactor, optimise, and improve before a minor inefficiency becomes a major production incident. By embracing these limits as a core part of the development process, teams can transition from being quota-fearing to quota-fluent—a critical step in mastering the art of building elite e-commerce experiences on the Salesforce platform.


Taming the Beast: A Developer’s Deep Dive into SFCC Meta Tag Rules https://www.rhino-inquisitor.com/taming-the-beast-a-developers-deep-dive-into-sfcc-meta-tag-rules/ Mon, 04 Aug 2025 07:13:04 +0000 https://www.rhino-inquisitor.com/?p=13525

The post Taming the Beast: A Developer’s Deep Dive into SFCC Meta Tag Rules appeared first on The Rhino Inquisitor.


At some point in your Salesforce B2C Commerce Cloud career, you’ve been handed The Spreadsheet. It’s a glorious, terrifying document, often with 10,000+ rows, meticulously crafted by an SEO team. Each row represents a product, and each column contains the perfect, unique meta title and description destined to win the favour of the Google gods. Your heart sinks. You see visions of tedious data imports, endless validation, and the inevitable late-night fire drill when someone screams, “The staging data doesn’t match production!”.

Most of us have glanced at the “Page Meta Tag Rules” section in Business Manager, shrugged, and moved on to what we consider ‘real’ code. That’s a mistake. This isn’t just another BM module for merchandisers to tinker with; it’s a declarative engine for automating one of the most tedious and error-prone parts of e-commerce SEO. It’s a strategic asset for developers to build scalable, maintainable, and SEO-friendly sites.

This guide will dissect this powerful feature from a developer’s perspective. We’ll tame this beast by exploring its unique syntax, demystifying the “gotchas” of its inheritance model, and outlining advanced strategies for PDPs, PLPs, and even those tricky Page Designer pages. By the end, you’ll know how to leverage this tool to make your life easier and your SEO team happier, all without accidentally nuking their hard work.

The Anatomy of a Rule: Beyond the Business Manager UI

The first mental hurdle to clear is that Meta Tag Rules are not an imperative script. They are a declarative system. You are not writing code that executes line by line. Instead, you are defining a set of instructions—a recipe—that the platform’s engine interprets to generate a string of text. This distinction is fundamental because it dictates how these rules are built, tested, and debugged.

It’s a specialised, declarative Domain-Specific Language (DSL), not a general-purpose scripting environment like Demandware Script. This explains why you can’t just call arbitrary script APIs from within a rule and why the error feedback is limited. It’s about defining what you want the output to be and letting the platform’s engine figure out how to generate it.

The Three Pillars of Rule Creation

The process of creating a rule within Business Manager at Merchant Tools > SEO > Page Meta Tag Rules can be broken down into three logical steps:

Meta Tag Definitions (The "What")

A screenshot of the meta tag rule definitions screen in the Business Manager showing the description, og:url, robots, and title meta tag definition.

This is where you define the type of HTML tag you intend to create. Think of it as defining the schema for your output. You specify the Meta Tag Type (e.g., name, property, or title for the <title> tag) and the Meta Tag ID (e.g., description, keywords, og:title). For a standard meta description, the Type would be name and the ID would be description, which corresponds to <meta name="description"...>.

Rule Creation & Scopes (The "How" and "Where")

A screenshot of the Create Entry modal, displaying the form used to create a new rule for a specific scope, in this case, the Product Detail page.

This is the core logic. You create a new rule, give it a name, and associate it with one of the Meta Tag IDs you just defined. Critically, you must select a Scope. The scope (e.g., Product, Category/PLP, Content Detail/CDP) is the context in which the rule is evaluated. It determines which platform objects and attributes are available to your rule’s syntax. 

For example, the Product object is available in the Product scope, but not in the Content Listing Page scope.

Assignments (The "Who")

A screenshot of the meta tag rule assignment screen in Business Manager.

Once a rule is defined, you must assign it to a part of your site. You can assign a rule to an entire catalog, a specific category and its children, or a content folder. This assignment triggers the platform to use your rule for the designated pages.

The Syntax Cheat Sheet: Your Rosetta Stone

A futuristic, glowing blue holographic Rosetta Stone displaying various code symbols and syntax, representing a cheat sheet for a complex language.
Don't let the unique syntax of SFCC's Meta Tag Rules intimidate you. Think of this cheat sheet as your Rosetta Stone, unlocking the ability to create powerful, dynamic, and SEO-friendly tags for your entire site.

The rule engine has its own unique syntax, which is essential to master. All dynamic logic must be wrapped in ${...}. 

  • Accessing Object Attributes: The most common action is pulling data directly from platform objects. The syntax is straightforward: Product.name, Category.displayName, Content.ID, or Site.httpHostName. You can access both system and custom attributes, though some data types like HTML, Date, and Image are not supported.

  • Static Text with Constant(): To include a fixed string within a dynamic expression, you must use the Constant() function, such as Constant('Shop now at '). This is vital for constructing readable sentences.

Mastering Conditional Logic

The real power of the engine lies in its conditional logic. This is what allows for the creation of intelligent, fallback-driven rules.

  • The IF/THEN/ELSE Structure: This is the workhorse of the rule engine. It allows you to check for a condition and provide different outputs accordingly.

  • The Mighty ELSE (The Hybrid Enabler): The ELSE operator is the key to creating a “hybrid” approach that respects manual data entry. A rule like ${Product.pageTitle ELSE Product.name} first checks for a value in the manually-entered pageTitle attribute. If, and only if, that field is empty, it falls back to using the product’s name. This single technique is the most important for preventing conflicts between automated rules and manual overrides by merchandisers. 

  • Combining with AND and OR: These operators allow for more complex logic. AND requires both expressions to be true, while OR requires only one. They also support an optional delimiter, like AND(' | '), which elegantly joins two strings with a separator, but only if both strings exist. This prevents stray separators in your output.

  • Equality with EQ: For direct value comparisons, use the EQ operator. This is particularly useful for logic involving pricing, for instance, to check if a product has a price range (ProductPrice.min EQ ProductPrice.max) or a single price.
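For example, a hypothetical rule (illustrative, not taken from the docs) can combine IF/THEN/ELSE with EQ to adapt the wording for single-price versus ranged products:

```
${IF ProductPrice.min EQ ProductPrice.max THEN Constant('Now ') AND ProductPrice.min ELSE Constant('From ') AND ProductPrice.min}
```

A product with a single price would read "Now " followed by that price, while a product with a price range falls back to "From " followed by its lowest price.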

The Cascade: Understanding Inheritance, Precedence, and the Hybrid Approach

The Meta Tag Rules engine was designed with the “Don’t Repeat Yourself” (DRY) principle in mind. The inheritance model, or cascade, allows you to define a rule once at a high level, such as the root of your storefront catalog, and have it automatically apply to all child categories and products. This is incredibly efficient, but only if you understand the strict, non-negotiable lookup order the platform uses to find the right rule for a given page.

I’m not going to go into much detail here, as the complete fallback lookup order is already documented.

The Golden Rule: Building Hybrid-Ready Rules

The most common and damaging pitfall is the “Accidental Override.” Imagine a merchandiser spends days crafting the perfect, keyword-rich pageTitle for a key product. A developer then deploys a seemingly helpful rule like ${Product.name} assigned to the whole catalog. Because the rule is found and applied, it will silently overwrite the merchandiser’s manual work.

This isn’t just a technical problem; it’s a failure of process and collaboration. The platform’s inheritance model and conditional syntax force a strategic decision about data governance: will SEO be managed centrally via rules, granularly via manual data entry, or a hybrid of both? The developer’s job is not just to write the rule but to implement the agreed-upon governance model.

The solution is the Hybrid Pattern, which should be the default for almost every rule you create.

Example Hybrid PDP Title Rule: ${Product.pageTitle ELSE Product.name} | ${Site.displayName}

Let’s break down how the engine processes this:

  1. Product.pageTitle: The platform first checks the product object for a value in the pageTitle attribute. This is the field merchandisers use for manual entry in Business Manager (or hopefully imported from a third-party system).

  2. ELSE: If, and only if, the pageTitle attribute is empty or null, the engine proceeds to the expression after the ELSE operator. If pageTitle has a value, the rule evaluation stops, and that value is used.

This pattern provides the best of both worlds: automation and scalability for the thousands of products that don’t need special attention, and precise manual control for the high-priority pages that do. Adopting this pattern as a standard practice is the key to a harmonious relationship between development and business teams.

Advanced Strategies and Best Practices

Once you’ve mastered the fundamentals of syntax and inheritance, you can begin to craft mighty rules that go far beyond simple title generation.

Crafting Killer Rules: Practical Examples

The Perfect PDP Title (Hybrid)

Combines the product’s manual title, or falls back to its name, brand, and the site name. 

${Product.pageTitle ELSE Product.name AND Constant(' - ') AND Product.brand AND Constant(' | ') AND Site.displayName}

Scenario 1 (Manual pageTitle exists):

    Data: Product.pageTitle = “Best Trail Running Shoe for Rocky Terrain”
    Generated Output: Best Trail Running Shoe for Rocky Terrain

Scenario 2 (No manual pageTitle, falls back to dynamic pattern):

    Data:
    Product.name = “SummitPro Runner”
    Product.brand = “Peak Performance”
    Site.displayName = “GoOutdoors”

    Generated Output: SummitPro Runner - Peak Performance | GoOutdoors

The Engaging PLP Description (Hybrid)

Checks for a manual category description, otherwise generates a compelling, dynamic sentence. 

${Category.pageDescription ELSE Constant('Shop our wide selection of ') AND Category.displayName AND Constant(' at ') AND Site.displayName AND Constant('. Free shipping on orders over $50!')}

Scenario 1 (Manual pageDescription exists):

    Data: Category.pageDescription = “Explore our premium, all-weather tents. Designed for durability and easy setup, perfect for solo hikers or family camping trips.”

    Generated Output: Explore our premium, all-weather tents. Designed for durability and easy setup, perfect for solo hikers or family camping trips.

Scenario 2 (No manual pageDescription, falls back to dynamic pattern):

    Data:
    Category.displayName = “Camping Tents”
    Site.displayName = “GoOutdoors”

    Generated Output: Shop our wide selection of Camping Tents at GoOutdoors. Free shipping on orders over $50!

Dynamic OpenGraph Tags

Create separate rules for og:title and og:description using the same hybrid patterns. For og:image, you can access the product’s image URL. 

${ProductImageURL.viewType} (Note: the specific view type must be substituted, e.g. ProductImageURL.large)

    Scenario: A user shares a product page on a social platform.
    Data: The system has an image assigned to the product in the ‘large’ slot.
    Generated Output: https://www.gooutdoors.com/images/products/large/PROD12345_1.jpg

The Strategic Robots Tag

This is a truly advanced use case that demonstrates how rules can implement sophisticated SEO strategy. This rule helps prevent crawl budget waste and duplicate content issues by telling search engines not to index faceted search pages. 

${IF SearchRefinement.refinementColor OR SearchPhrase THEN Constant('noindex,nofollow') ELSE Constant('index,follow')}

Scenario 1 (User refines a category by color):

A user is on the “Backpacks” category page and clicks the “Blue” color swatch to filter the results.

    Data: SearchRefinement.refinementColor has a value (“Blue”).
    Generated Output: noindex,nofollow
    Result: This filtered page won’t be indexed by Google, saving crawl budget.

Scenario 2 (User performs a site search):

A user types “waterproof socks” into the search bar.

    Data: SearchPhrase has a value (“waterproof socks”).
    Generated Output: noindex,nofollow
    Result: The search results page won’t be indexed.

Scenario 3 (User lands on a standard category page):

A user navigates directly to the “Backpacks” category page without any filters.

    Data: SearchRefinement.refinementColor is empty AND SearchPhrase is empty.
    Generated Output: index,follow
    Result: The main category page will be indexed by Google as intended.

The Page Designer Conundrum: The Unofficial Workaround

Here we encounter a significant limitation: out of the box, the Meta Tag Rules engine does not work with standard Page Designer pages. The underlying Page API object lacks the necessary pageMetaTags support. This creates a notable gap for sites that rely on content marketing and campaign landing pages built in Page Designer.

Luckily, a complete, working example of this workaround has already been created by David Pereira.

The Minefield: Warnings, Pitfalls, and Troubleshooting

While powerful, the Meta Tag Rules engine is a minefield of potential “gotchas” that can frustrate developers and cause real business impact if not anticipated.

  • Warning – The “Accidental Override”: This cannot be overstated. A simple, non-hybrid rule (${Product.name}) deployed to production can instantly nullify months of careful, manual SEO work by the merchandising team. The Hybrid Pattern (${Product.pageTitle ELSE...}) is your shield. Always use it. This is fundamentally a process failure, not just a technical one, highlighting the need for a clear “contract” between development and business teams about who owns which data.

  • Pitfall – The “30-Minute Wait of Despair”: When you save or assign a rule in Business Manager, it can take up to 30 minutes for the change to appear on the storefront. This is due to platform-level caching. This delay is a classic initiation rite for new SFCC developers who are convinced their rule is broken. The solution is patience: save your rule, then go get a coffee before you start frantically debugging. (Note: I personally have never had to wait this long)

  • Pitfall – The Empty Attribute Trap: If your rule references an attribute (Product.custom.seoKeywords) that is empty for a particular product, the engine treats it as a null/false value. This can cause your conditional logic to fall through to an ELSE condition you didn’t expect. This underscores that the effectiveness of your rules is directly dependent on the quality and completeness of your catalog and content data.

Troubleshooting the "Black Box"

You cannot attach the Script Debugger to the rule engine or step through its execution. Troubleshooting is a process of indirect observation.

  1. Step 1: Preview in Business Manager: Your first and best line of defense. The SEO module has a preview function that lets you test a rule against a specific product, category, or content asset ID. This gives you instant feedback on the generated output without affecting the live site.

  2. Step 2: Inspect the Source: The ultimate source of truth is the final rendered HTML. Load the page on your storefront, right-click, and select “View Page Source.” Search for <title> or <meta name="description"> to see exactly what the engine produced.

  3. Step 3: The Code-Level Safety Net: As a developer integrating the rules into templates, you have one final check. The dw.web.PageMetaData object, which is populated by the rules, is available in the pdict. You can use the method isPageMetaTagSet('description') within an <isif> statement in your ISML template. This allows you to render a hardcoded, generic fallback meta tag directly in the template if, for some reason, the rule engine failed to generate one.
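In an SFRA-style htmlHead template, that safety net might look like this. It is a sketch: the pdict.CurrentPageMetaData name and the fallback copy are assumptions to verify against your own codebase:

```
<iscomment>Sketch: hardcoded fallback when no rule produced a description</iscomment>
<isif condition="${!pdict.CurrentPageMetaData.isPageMetaTagSet('description')}">
    <meta name="description" content="Shop quality gear and equipment online."/>
</isif>
```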

The Performance Question: Debunking the Myth

A common concern is that complex nested IF/ELSE rules might slow down page load times, but this is mostly a myth. The real performance issue relates to caching. For cached pages, the impact on performance is nearly nonexistent because the server evaluates the rule only once when generating the page’s HTML during the initial request. This HTML is then stored in the cache. Future visitors receive this pre-rendered static HTML directly from the cache, skipping re-evaluation. The small performance cost only occurs on cache misses. Thus, the focus shouldn’t be on creating overly simple rules but on maintaining a high cache hit rate. 

We can be reasonably confident that the Salesforce team has engineered this feature for performance. Remember the platform-level cache with the 30-minute delay mentioned earlier: within that “black box,” an additional caching layer is likely in place to protect rule evaluation as well.

The Final Verdict: Meta Tag Rules vs. The Alternatives

When deciding how to manage SEO metadata in SFCC, developers face three philosophical choices:

Manual Entry Only (The Control Freak)

  • Manually populating the pageTitle, pageDescription, etc., for every item in Business Manager.

    • Pros: Absolute, granular control. Perfect for a small catalog or a handful of critical landing pages.

    • Cons: Completely unscalable. Highly prone to human error and data gaps. A maintenance and governance nightmare for any site of significant size.

Custom ISML/Controller Logic (The Re-inventor)

Ignoring the rule engine and writing your own logic in controllers and ISML templates to build meta tags.

  • Pros: Theoretically unlimited flexibility. You can call external services, perform complex calculations, etc.

  • Cons: You are re-inventing a core platform feature, which introduces significant technical debt. The logic is completely hidden from business users, making it a black box that only developers can manage. It’s harder to maintain and creates upgrade path risks.

Meta Tag Rules (The Pragmatist)

Using the native feature as intended.

  • Pros: The standard, platform-supported, scalable, and maintainable solution. The logic is transparent and manageable by trained users in Business Manager. It fully supports the hybrid approach, offering the perfect balance of automation and control.

  • Cons: You are constrained by the built-in DSL. It has known limitations, such as the Page Designer gap and the restricted syntax, that may require custom workarounds.

What about the PWA Kit?

Yes, you can absolutely continue to leverage the power of Page Meta Tag Rules from the Business Manager in a headless setup. The key is understanding that your headless front end (like a PWA) communicates with the SFCC backend via APIs. 

While historically this might have required a development task to extend a standard API or create a new endpoint to expose the dynamically generated meta tag values, this is becoming increasingly unnecessary. Salesforce is actively expanding the Salesforce Commerce API (SCAPI), continuously adding new endpoints and enriching existing ones to expose more data directly.

This ongoing expansion, as seen with enhancements to APIs like Shopper Search and Shopper Products, means that the SEO-rich data generated by your rules is more likely to be available out of the box. Instead of building custom solutions, the task for developers is shifting towards simply querying the correct, updated SCAPI endpoint. 

This evolution makes it easier than ever to fetch the meta tags for these pages. It validates the headless approach, allowing you to maintain a robust, centralised SEO strategy in the Business Manager while fully embracing the flexibility and performance of a modern front-end architecture.
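If you map such a payload yourself, the hybrid fallback translates naturally to the client. A hypothetical sketch (the toHeadTags helper is invented, and the pageTitle/pageDescription field names should be verified against your SCAPI version):

```javascript
// Hypothetical mapping of a SCAPI Shopper Products payload to head tags,
// mirroring the hybrid fallback idea from Meta Tag Rules on the client.
function toHeadTags(product, siteName) {
  return {
    title: product.pageTitle || (product.name + ' | ' + siteName),
    description: product.pageDescription || '',
  };
}
```

The manual SEO value wins when present; otherwise the dynamic pattern takes over, exactly as in the rule engine.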

A screenshot of SFCC updates to headless APIs relevant to Meta Tag Rules.

Conclusion: Go Forth and Automate

Salesforce B2C Commerce Cloud’s Page Meta Tag Rules are far more than a simple configuration screen. They are a strategic tool for building scalable, efficient, and collaborative e-commerce platforms. By mastering the hybrid pattern, understanding the inheritance cascade, knowing how to tackle limitations like the Page Designer gap, and—most importantly—communicating with your business teams, you can transform SEO from a manual chore into an automated powerhouse.

So, the next time that dreaded SEO spreadsheet lands in your inbox, don’t sigh and start writing an importer. Crack open the Page Meta Tag Rules, build some smart, hybrid rules, and go grab a coffee. You’ve just saved your future self hundreds of hours of pain.

You’re welcome.


Field Guide to Custom Caches: Wielding a Double-Edged Sword https://www.rhino-inquisitor.com/field-guide-to-custom-caches-in-sfcc/ Mon, 28 Jul 2025 07:32:55 +0000 https://www.rhino-inquisitor.com/?p=13328

The post Field Guide to Custom Caches: Wielding a Double-Edged Sword appeared first on The Rhino Inquisitor.


You think you know caching. You’ve enabled page caching, fiddled with content slot TTLs, and called it a day. And your Salesforce B2C Commerce Cloud site is still slower than a snail in molasses. Why? Because you’re ignoring the most potent weapon in your performance arsenal: the Custom Cache.

Custom Caches are a double-edged sword, though. Wielded with discipline, precision, and a deep understanding of their limitations, they are one of the most potent performance-tuning instruments in your arsenal. Wielded carelessly, they will cut you, your application, and your customer’s experience to ribbons. The problem is that the platform’s API for dw.system.CacheMgr is deceptively simple, masking a minefield of architectural traps for the unwary developer.   

This is not a beginner’s tutorial. This is a field guide for the professional SFCC developer who needs to move beyond basic usage and master this powerful, perilous feature. We’re going to charge headfirst into the complexity, expose the sharp edges, and arm you with the patterns and discipline required to use Custom Caches safely, effectively, and with confidence. 

The Lay of the Land: Choosing Your Data Store

Before you even think about writing CacheMgr.getCache(), you need to understand its purpose. Using the wrong tool for the job is the first step toward disaster. 

In SFCC, you have several options for storing temporary data, and choosing the correct one is a foundational architectural decision.

Custom Cache vs. Page Cache: A Quick Primer

Developers new to the platform frequently conflate Custom Caches and the Page Cache. They are fundamentally different beasts operating at different layers of the architecture. Mistaking one for the other is like using a hammer to turn a screw.

  • Page Cache is for caching rendered output. It operates at the web server tier and stores full HTTP responses—typically HTML fragments generated from ISML templates. You control it with the <iscache> tag or the response.setExpires() script API method. When a request hits a URL whose response is in the Page Cache, the web server serves it directly, never even bothering the application server. It is incredibly fast and is the primary defence against high traffic for storefront pages.

  • Custom Cache is for caching application data. It operates at the application server tier and stores JavaScript objects and primitives inside a script or controller’s execution context. You control it exclusively through the dw.system.CacheMgr script API. It’s designed to avoid recalculating expensive data or re-fetching it from an external source during the execution of a controller that will ultimately produce a response.

The distinction is critical: cache the final, cooked meal with Page Cache; cache the raw ingredients with a Custom Cache. To avoid re-rendering a product tile’s HTML, use Page Cache with a remote include. If you need to avoid re-fetching the product’s third-party ratings data before you render the tile, use a Custom Cache.
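For the “cooked meal” side, a single ISML directive at the top of the template is often all it takes. A sketch; tune the lifetime to your content, and add varyby="price_promotion" if the fragment is price- or promotion-sensitive:

```
<iscache type="relative" hour="24"/>
```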

The Developer's Dilemma: request vs. session vs. CacheMgr

Within the application tier, you have three primary ways to store temporary, non-persistent data during script execution. Their scopes and lifetimes are vastly different, and choosing the wrong one can lead to performance degradation, security vulnerabilities, or bizarre bugs.

  • request.custom: This object lives for the duration of a single HTTP request. It is the most ephemeral of the scopes. Its primary purpose is to pass data between middleware steps in an SFRA controller chain or from a controller to the rendering template within the same server call. It’s a scratchpad for the current transaction and nothing more.
  • session.custom / session.privacy: These objects live for the duration of a user’s session. The platform defines this with a 30-minute soft timeout (which logs the user out and clears privacy data) and a six-hour hard timeout (after which the session ID is invalid). This scope is user-specific and sticky to a single application server. The critical difference is that writing to session.custom can trigger a re-evaluation of the user’s dynamic customer groups, while session.privacy does not. Data in session.privacy is also automatically cleared on logout.
  • dw.system.CacheMgr: This is an application-wide, server-specific cache. The data is shared by all users and all sessions that happen to land on the same application server. Its lifetime is determined either by a configured time-to-live (TTL) or until a major invalidation event occurs, such as a code activation or data replication.

 

The Forge: Mechanics of a Custom Cache

Once you’ve determined that a Custom Cache is the right tool, implementation requires a precise, methodical approach. There is no room for improvisation. Follow these steps as a mandatory checklist.

The Blueprint: Defining Caches in caches.json


Your cache’s life begins with a simple declaration. This is done in a JSON file, conventionally named caches.json, which must reside within your cartridge.       

1. Create caches.json: Inside your cartridge, create the file. For example: int_mycartridge/caches.json.

2. Define Your Caches: The file contains a single JSON object with a caches key, which is an array of cache definitions. Each definition requires an id and can optionally include an expireAfterSeconds property.

{
  "caches": [
    {
      "id": "UnlimitedTestCache"
    },
    {
      "id": "TestCacheWithExpiration",
      "expireAfterSeconds": 10
    }
  ]
}

The id must be globally unique across every single cartridge in your site’s cartridge path. A duplicate ID will cause the cache to silently fail to initialize, with the only evidence being an error in the logs. The expireAfterSeconds sets a TTL for entries in that cache. If omitted, entries have no time-based expiration and persist until the next global cache clear event.
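To make the expireAfterSeconds semantics concrete, here is a plain-JavaScript sketch with an injectable clock. This is an illustrative model only, not the platform's implementation; TtlCache is a made-up name:

```javascript
// Illustrative sketch of expireAfterSeconds semantics -- NOT platform code.
// "now" is injectable so the behaviour can be shown without real waiting.
function TtlCache(expireAfterSeconds, now) {
    this.ttlMs = expireAfterSeconds ? expireAfterSeconds * 1000 : null;
    this.now = now || function () { return Date.now(); };
    this.entries = {};
}

TtlCache.prototype.put = function (key, value) {
    this.entries[key] = { value: value, storedAt: this.now() };
};

TtlCache.prototype.get = function (key) {
    var entry = this.entries[key];
    if (!entry) { return undefined; }
    // Entries in a cache without expireAfterSeconds live until a global clear.
    if (this.ttlMs !== null && this.now() - entry.storedAt >= this.ttlMs) {
        delete this.entries[key];
        return undefined;
    }
    return entry.value;
};

// Simulated clock: start at t=0, advance manually.
var t = 0;
var clock = function () { return t; };

var tenSecondCache = new TtlCache(10, clock);
var unlimitedCache = new TtlCache(undefined, clock);

tenSecondCache.put('greeting', 'hello');
unlimitedCache.put('greeting', 'hello');

t = 9999;   // 9.999s later: still within the TTL
console.log(tenSecondCache.get('greeting')); // 'hello'

t = 10000;  // 10s later: TTL cache expired, unlimited cache untouched
console.log(tenSecondCache.get('greeting')); // undefined
console.log(unlimitedCache.get('greeting')); // 'hello'
```

The injectable clock is only there to make the expiry window visible; the point is that omitting expireAfterSeconds means entries survive until a global cache clear event.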

3. Register in package.json: The platform needs to know where to find your definition file. Reference it in your cartridge’s package.json using the caches key. The path is relative to the package.json file itself.

{
  "caches": "./caches.json"
}

4. Enable in Business Manager: Finally, you must globally enable the custom cache feature. Navigate to Administration > Operations > Custom Caches and check the “Enable Caching” box.  Disabling this will clear all custom caches on the instance. This page will also become your primary tool for monitoring cache health.

A screenshot of the "Administration > Operations > Custom Caches" screen in the business manager.

The Core API Arsenal: CacheMgr and Cache

The script API for interacting with your defined caches is straightforward, revolving around two classes: dw.system.CacheMgr and dw.system.Cache.  

  • CacheMgr.getCache(cacheID): This is your entry point. It retrieves the cache object that you defined in caches.json.

  • cache.put(key, value): Directly places an object into the cache under a specific key, overwriting any existing entry.

  • cache.get(key): Directly retrieves an object from the cache for a given key. It returns undefined if the key is not found. 

  • cache.invalidate(key): Manually removes a single entry from the cache.

While these methods are simple, using them directly is a beginner’s trap. A typical but flawed pattern is:

if (!cache.get(key)) { cache.put(key, loadData()); }

This code is not atomic. On a busy server, two concurrent requests could both evaluate the if condition as true, both execute the expensive loadData() function, and one will wastefully overwrite the other’s result. This is inefficient and can lead to race conditions.
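To see the race in slow motion, here is a plain-JavaScript simulation (illustrative only) that interleaves the two requests by hand: both see a miss before either writes, so the loader runs twice:

```javascript
// Illustrative single-file simulation of the check-then-act race.
// Two "concurrent" requests are interleaved by hand: both check the cache
// before either one writes, so the expensive loader runs twice.
var cache = {};
var loaderCalls = 0;

function loadData() {
    loaderCalls++;
    return { expensive: true };
}

// Request A and request B both evaluate the "if" condition first...
var aSeesMiss = cache['key'] === undefined;
var bSeesMiss = cache['key'] === undefined;

// ...and only then do they write, one wastefully overwriting the other.
if (aSeesMiss) { cache['key'] = loadData(); }
if (bSeesMiss) { cache['key'] = loadData(); }

console.log(loaderCalls); // 2 -- the expensive work was duplicated
```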

The "Get-or-Load" Pattern: The Only Way to Populate Your Cache

There is a better way. It is the (in my opinion) only acceptable way to read from and write to a custom cache: the cache.get(key, loader) method.

This method combines the get and put operations into a single, atomic action on the application server. It attempts to retrieve the value for a key. If it’s a miss, it executes the loader callback function, places the function’s return value into the cache, and then returns it. If the loader function returns undefined (not null), the failure is not cached. This keeps your logic clean and concise. (And hopefully, behind that black box, the concurrency conundrum has been taken care of 😇)
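The contract can be modelled in a few lines of plain JavaScript. This is an illustrative sketch only; the real implementation lives inside dw.system.Cache:

```javascript
// Sketch of the get(key, loader) contract described above (illustrative).
var store = {};

function getOrLoad(key, loader) {
    if (Object.prototype.hasOwnProperty.call(store, key)) {
        return store[key]; // hit: the loader never runs
    }
    var value = loader();
    // A loader returning undefined signals "do not cache this failure";
    // null, by contrast, IS a cacheable value.
    if (value !== undefined) {
        store[key] = value;
    }
    return value;
}

var calls = 0;
var first = getOrLoad('pid-1', function () { calls++; return { id: 'pid-1' }; });
var second = getOrLoad('pid-1', function () { calls++; return { id: 'pid-1' }; });
console.log(calls);            // 1 -- the second call was a cache hit
console.log(first === second); // true -- same cached object

getOrLoad('broken', function () { return undefined; });
console.log('broken' in store); // false -- the failure was not cached
```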

Here is the implementation for fetching data from a third-party API:

var CacheMgr = require('dw/system/CacheMgr');
var MyHTTPService = require('~/cartridge/scripts/services/myHTTPService');
var Site = require('dw/system/Site');

/**
 * Retrieves data for a given API endpoint, utilizing a custom cache.
 * @param {string} apiEndpoint - The specific API endpoint to call.
 * @returns {Object|undefined} - A plain JavaScript object with the API data, or undefined on failure.
 */
function getApiData(apiEndpoint) {
    // Retrieve the cache defined in caches.json
    var apiCache = CacheMgr.getCache('ExternalRatingsAPI');

    // Construct a robust, unique cache key
    var cacheKey = Site.current.ID + '_api_data_' + apiEndpoint;

    // Use the get-or-load pattern.
    var result = apiCache.get(cacheKey, function () {
        // This loader function only executes on a cache miss for this specific key.
        var service = MyHTTPService.getService();
        var serviceResult = service.call({ endpoint: apiEndpoint });

        // Check for a successful result before caching
        if (serviceResult.ok && serviceResult.object) {
            // IMPORTANT: Return a simple JS object, not the full service result.
            // This prevents caching large, complex API objects.
            try {
                return JSON.parse(serviceResult.object.text);
            } catch (e) {
                // Failed to parse, don't cache the error.
                return undefined;
            }
        }

        // Returning undefined prevents caching a failure.
        return undefined;
    });

    return result;
}

The Art of the Key: Your Cache's True Identity

Developers often obsess over the value being cached, but this is a strategic error. The value is just data; the key is the entire strategy. A poorly designed key will lead to cache collisions (serving wrong data) or cache misses (negating any performance benefit).

An anti-pattern, such as adding a dynamic and irrelevant product position parameter to a product tile’s cache key, can lead to a near-zero hit rate, rendering the cache completely useless.

The Anatomy of a Perfect Key

A robust cache key is not just a string; it’s a self-documenting, collision-proof identifier. Every key you create should be:

  1. Unique: It must uniquely identify a single piece of cacheable data.

  2. Predictable: You must be able to deterministically reconstruct the exact same key whenever you need to access the data.

  3. Scoped: It must contain all the context necessary to distinguish it from similar data for other sites, locales, or conditions.

A highly effective pattern is to build keys from concatenated, delimited parts: PURPOSE::SCOPE::IDENTIFIER::CONTEXT.

  • Bad Key: '12345' (What is it? A product? A category? For which site?)

  • Good Key: 'product_tile_data::RefArch_US::12345_blue::en_US'

This structure prevents a product cache from colliding with a content cache, ensures data for the US site doesn’t leak into the EU site, and makes debugging from logs infinitely easier because the key itself tells you exactly what it’s for. Always include Site.current.ID and the current locale for any site- or language-specific data.
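A tiny, hypothetical helper makes the pattern hard to get wrong. Note that buildCacheKey is not a platform API; in real code the scope would come from Site.current.ID and the context from the request locale:

```javascript
// Hypothetical helper assembling the PURPOSE::SCOPE::IDENTIFIER::CONTEXT
// pattern described above. Failing fast on a missing part prevents
// half-formed keys from silently colliding.
function buildCacheKey(purpose, scope, identifier, context) {
    var parts = [purpose, scope, identifier, context];
    parts.forEach(function (part) {
        if (!part) {
            throw new Error('Cache key part missing: ' + parts.join('::'));
        }
    });
    return parts.join('::');
}

var key = buildCacheKey('product_tile_data', 'RefArch_US', '12345_blue', 'en_US');
console.log(key); // product_tile_data::RefArch_US::12345_blue::en_US
```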

The Complexity of Excess

While it might seem clever to make your cache key highly specific and unique, this can backfire by reducing the chances of cache hits.

Striking the right balance is key. (pun intended)

I’ve also seen situations where the effort spent retrieving extensive data from the database to craft the key ends up cancelling out the performance benefits of custom caching.
After all, if generating the key takes longer than the cache saves, it’s time to rethink the approach.

The Serialization Conundrum: Caching API Objects vs. POJOs

You must not cache raw SFCC API objects. Never put a dw.catalog.Product, dw.order.Order, or dw.catalog.ProductInventoryList object directly into the cache.

While the documentation ambiguously states that “tree-like object structures” can be stored, this is a siren song leading to disaster. These API objects are heavyweight, carry live database connections, are not truly serializable, and can easily blow past the 128KB per-entry size limit, causing silent write failures that are only visible in the logs. 

The only performant and safe approach is to map the data you need from the heavy API object into a lightweight Plain Old JavaScript Object (POJO) or Data Transfer Object (DTO) before caching it.

Anti-Pattern: Caching the Full API Object

// DO NOT DO THIS
var CacheMgr = require('dw/system/CacheMgr');
var ProductMgr = require('dw/catalog/ProductMgr');
var productCache = CacheMgr.getCache('ProductData');

productCache.get('some-product-id', function () {
    var product = ProductMgr.getProduct('some-product-id');
    return product; // Caching the entire, heavy dw.catalog.Product object
});

Correct Pattern: Caching a Lightweight POJO

// THIS IS THE CORRECT WAY
var CacheMgr = require('dw/system/CacheMgr');
var ProductMgr = require('dw/catalog/ProductMgr');
var productCache = CacheMgr.getCache('ProductData');

productCache.get('some-product-id', function () {
    var product = ProductMgr.getProduct('some-product-id');
    if (!product) {
        // Returning null caches the "not found" result;
        // return undefined instead if the miss should not be cached.
        return null;
    }

    // Create a lightweight POJO with only the data you need
    var productPOJO = {
        id: product.ID,
        name: product.name,
        shortDescription: product.shortDescription ? product.shortDescription.markup : '',
        price: product.priceModel.price.value
    };

    return productPOJO; // Cache the small, clean object
});

This approach creates smaller, faster, and safer cache entries. It decouples your cached data from the live object model and respects the platform’s limitations.

The release notes even mention that custom caches are intended to return immutable objects, reinforcing that you should be working with copies of data, not live API instances. 

In the Trenches: Real-World Battle Plans

With the theory and mechanics established, let’s apply them to the most common scenarios where custom caches provide the biggest performance wins.

Use Case 1: Taming External API Latency

This is the poster child for custom caches. Your site needs to display real-time shipping estimates, user-generated reviews, or social media feeds from a third-party service. Making a live HTTP call on every page load is a recipe for a slow, unreliable site. By wrapping the service call in the “get-or-load” pattern, you can cache the response for a few minutes, drastically reducing latency and insulating your site from temporary blips in the third-party service’s availability. 

Remember, there’s another option I mentioned in a previous article: using the ServiceRegistry for caching.

Use Case 2: Caching Expensive Computations

Some business logic is just plain expensive. The classic example is determining if a main product should display an “On Sale” banner by iterating through all of its variation products to check their promotion status. On a product grid page with 24 products, each with 10 variants, this could mean hundreds of object inspections just to render the page. This is a perfect candidate for a custom cache.

Calculate the result once, store the simple boolean result in a cache with a key like 'main_promo_status::' + mainPid, and set a reasonable TTL (e.g., 15 minutes) to align with promotion update frequencies.
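Sketched in plain JavaScript (a simulation with made-up catalog data, not dw.catalog API calls), the idea looks like this:

```javascript
// Illustrative simulation: compute the "on sale" banner flag once per main
// product and cache the boolean, instead of re-walking all variants per render.
// In real code this would be a dw.system.Cache with a ~15 minute TTL.
var promoCache = {};
var variantChecks = 0;

// Hypothetical catalog data: main product -> variants with promo flags.
var variantsByMain = {
    'main-1': [{ onPromotion: false }, { onPromotion: true }],
    'main-2': [{ onPromotion: false }, { onPromotion: false }]
};

function isMainOnSale(mainPid) {
    var cacheKey = 'main_promo_status::' + mainPid;
    if (cacheKey in promoCache) { return promoCache[cacheKey]; }
    var onSale = variantsByMain[mainPid].some(function (variant) {
        variantChecks++;
        return variant.onPromotion;
    });
    promoCache[cacheKey] = onSale; // a boolean is a tiny, safe cache entry
    return onSale;
}

console.log(isMainOnSale('main-1')); // true  (2 variant checks)
console.log(isMainOnSale('main-1')); // true  (no extra checks -- cache hit)
console.log(variantChecks);          // 2
```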

Use Case 3: "Configuration as Code"

Instead of fetching site-level configurations or feature switches from the database (through Site Preferences or Custom Objects) on every request, you can write a helper function that loads this data into a long-lived custom cache on the first request; subsequent requests will retrieve the configuration directly from memory.

This approach significantly reduces the load on the database while providing lightning-fast access to configuration data.
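As a minimal sketch, assuming fetchPreferencesFromDb stands in for the real Site Preference or Custom Object reads:

```javascript
// "Configuration as code" sketch: the expensive preference lookup runs once;
// every later call is served from the in-memory cache.
var configCache = {};
var dbReads = 0;

// Stand-in for Site Preference / Custom Object access (hypothetical data).
function fetchPreferencesFromDb(siteId) {
    dbReads++;
    return { enableRatings: true, maxTiles: 24, siteId: siteId };
}

function getConfig(siteId) {
    var key = 'site_config::' + siteId;
    if (!(key in configCache)) {
        configCache[key] = fetchPreferencesFromDb(siteId);
    }
    return configCache[key];
}

getConfig('RefArch_US');
getConfig('RefArch_US');
getConfig('RefArch_US');
console.log(dbReads); // 1 -- every request after the first is a memory read
```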

The Minefield: Warnings, Anti-Patterns, and How to Survive

Now for the most crucial section of this guide. Understanding these pitfalls is what separates a developer who uses caches effectively from one who creates production incidents.

The Great Myth: Cross-Server Invalidation

Let this be stated as clearly as possible: There is no reliable, built-in mechanism to invalidate a single custom cache key across all application servers in a production environment.

The cache.invalidate(key) method is a trap. It is functionally useless for ensuring data consistency on a multi-server POD. It only clears the key on the single application server that happens to execute the code. The other 2, 5, or 10 servers in the instance will continue to happily serve the stale data until their TTL expires or a global event occurs.
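A small simulation makes the trap obvious: model each application server as its own cache instance, and watch invalidate(key) reach only one of them (illustrative plain JavaScript, not platform code):

```javascript
// Each app server holds its OWN copy of a custom cache, and invalidate(key)
// only reaches the server that happens to execute it.
function makeServerCache() {
    var store = {};
    return {
        put: function (key, value) { store[key] = value; },
        get: function (key) { return store[key]; },
        invalidate: function (key) { delete store[key]; }
    };
}

var serverA = makeServerCache();
var serverB = makeServerCache();

// Both servers cached the same entry at some point.
serverA.put('price::sku-1', 100);
serverB.put('price::sku-1', 100);

// A request lands on server A and invalidates the key there...
serverA.invalidate('price::sku-1');

console.log(serverA.get('price::sku-1')); // undefined -- reloaded on next use
console.log(serverB.get('price::sku-1')); // 100 -- server B still serves stale data
```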

The only ways to reliably clear a custom cache across an entire instance are these “sledgehammer” approaches:

  • Data Replication: A full or partial data replication will clear all custom caches.

  • Code Activation: Activating a new code version clears all custom caches.

  • Manual Invalidation: A Business Manager user navigating to Administration > Operations > Custom Caches and clicking the “Clear” button for a specific cache.

This limitation has profound architectural implications. It means you must design your caching strategy around time-based expiration (expireAfterSeconds). You have to accept and plan for a window of potential data staleness. Do not attempt to build a complex, event-driven invalidation system (e.g., trying to have a job invalidate a key). It is doomed to fail in a multi-server environment.

Caching User-Specific Data

A cardinal sin. Never put Personally Identifiable Information (PII) or any user-specific data in a global custom cache. It is a massive security vulnerability and functionally incorrect, as the data will be shared across all users on that server. 

Use session.privacy for user-specific data.

The Rogue's Gallery: Other Common Pitfalls

  • Ignoring the 20MB Total Limit: This is a hard limit for all custom caches on a single application server. One misbehaving cache that stores massive objects can pollute the entire 20MB space, causing the eviction of other, well-behaved caches. 

  • Ignoring the 128KB Entry Limit: Trying to put an object larger than 128KB will result in a “write failure” that is only visible in the Business Manager cache statistics and custom logs. It does not throw an exception, so your code will appear to work while the cache remains empty.

  • Assuming Cache is Persistent: It is transient, in-memory storage. It is not a database. A server restart, code deployment, or random eviction can wipe your data at any time. Your code must always be able to function correctly on a cache miss.
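One defensive habit against the silent 128KB write failures above is a pre-flight size check before caching. This sketch assumes JSON-serialized length is a reasonable proxy for entry size; the platform's exact accounting may differ:

```javascript
// Hedged sketch: reject entries that would blow past the documented
// 128KB per-entry ceiling instead of letting the write fail silently.
var MAX_ENTRY_BYTES = 128 * 1024;

function isCacheableSize(value) {
    try {
        // Rough proxy for entry size: the JSON-serialized length.
        return JSON.stringify(value).length <= MAX_ENTRY_BYTES;
    } catch (e) {
        return false; // circular / non-serializable structures: don't cache
    }
}

console.log(isCacheableSize({ id: '12345', name: 'Blue Shirt' })); // true

var huge = { blob: new Array(200000).join('x') }; // ~200KB string
console.log(isCacheableSize(huge)); // false
```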

The Watchtower: Monitoring Your Cache's Health

You cannot manage what you do not measure. A “set it and forget it” approach to caching is irresponsible. You must actively monitor the health and performance of your caches.

Reading the Tea Leaves: The Business Manager Custom Caches Page

Your primary dashboard is located at Administration > Operations > Custom Caches. This page lists all registered caches and provides statistics for the last 15 minutes on the current application server. The key metrics to watch are:

  • Hits / Total: This is your hit ratio. For a frequently accessed cache, this number should be very high (ideally 95%+). A low hit ratio means your cache is ineffective. This could be due to poorly designed keys, a TTL that is too short, or constant cache clearing.

  • Write Failures: This number must be zero. A non-zero value is a critical alert. It almost certainly means you are violating the 128KB per-entry size limit, likely by trying to cache a full API object instead of a POJO.

  • Clear Button: The manual override. Use it when you need to force a refresh of a specific cache’s data across all application servers.

A Debugging Workflow: From Dashboard to Code

When you identify a performance problem, follow this systematic process to diagnose cache-related issues :

  1. Observe (Production): Start in Reports & Dashboards > Technical. Sort by “Percentage of Processing Time” or “Average Response Time” to find your slowest controllers and remote includes. These are your top suspects. Note their cache hit ratios in the report. A low hit ratio on a slow controller is a huge red flag.

  2. Hypothesize (Business Manager): Go to the Custom Caches page. Does the slow controller use a custom cache? Is that cache showing a low hit rate or, worse, write failures? This helps correlate the storefront performance issue with a specific cache’s health.

  3. Reproduce & Pinpoint (Development): Switch to a development instance. Use the Pipeline Profiler to get a high-level timing breakdown of the suspect controller. This tool confirms which parts of the request are slow, but it does not show cached requests.

  4. Drill Down (Code Profiler): To dig deeper into the code itself, use the Code Profiler. Run the uncached controller and look for the specific script lines or API calls that consume the most execution time. This will tell you exactly what expensive operation needs to be wrapped in a cache call.

Wielding the Cache with Confidence

Custom Caches are not inherently good or bad. They are powerful. And like any powerful tool, they demand respect, understanding, and discipline. The path to mastery is not through memorising API calls, but through internalising a set of non-negotiable principles.

  1. Cache Data, Not HTML: Use Custom Cache for application data, Page Cache for rendered output.

  2. Choose the Right Scope: Understand the difference between request, session, and cache. Misuse is costly.

  3. The Key is the Strategy: Be deliberate and systematic in how you name things. A good key is self-documenting and collision-proof.

  4. Embrace “Get-or-Load”: The cache.get(key, loader) pattern is the only safe and atomic way to populate a cache. Use it. Always.

  5. Cache POJOs, Not API Objects: Map heavy API objects to lightweight POJOs before caching to save memory and avoid errors.

  6. Accept the Invalidation Myth: Granular, cross-server invalidation is not a feature. Design around TTL and embrace a small window of potential staleness.

  7. Monitor Relentlessly: Use the Business Manager dashboards and profilers to keep a constant watch on your cache’s health.

By adhering to these rules, you transform the custom cache from a source of unpredictable bugs into a reliable, high-performance asset.

The post Field Guide to Custom Caches: Wielding a Double-Edged Sword appeared first on The Rhino Inquisitor.

Session Sync Showdown: From plugin_slas to Native Hybrid Auth in SFRA and SiteGenesis
https://www.rhino-inquisitor.com/slas-in-sfra-or-sitegenesis/ (Thu, 24 Jul 2025)


Headless APIs have been available in Salesforce B2C Commerce Cloud for some time, under the name OCAPI (Open Commerce API). In 2020, a new set of APIs, known as SCAPI (Salesforce Commerce API), was introduced.

Within that new set of APIs, a subset was focused on giving developers complete control of the login process of customers, called SLAS (Shopper Login And API Access Service). In February 2022, Salesforce also released a cartridge for SFRA, enabling easy incorporation of SLAS within your current setup.

But let’s cut to the chase. The plugin_slas cartridge (which we will discuss later in the article) was a necessary bridge for its time, but it also introduced performance bottlenecks, API quota concerns, and maintenance headaches. 
With the release of native Hybrid Authentication, Salesforce has fundamentally changed the game for hybrid SFRA/Composable storefronts. This guide is your in-depth exploration of the “why” and “how”—we’ll dissect the architectural shift and equip you with the strategic insights you need.

What is SLAS?

A diagram showing the different steps of the SLAS process.

But what is SLAS, anywho? It is a set of APIs that allows secure access to Commerce Cloud shopper APIs for headless applications.

Some use-cases:

  • Single Sign-On: Allow your customers to use a single set of log-ins across multiple environments (Commerce Cloud vs. a Community Portal)

  • Third-Party Identity Providers: Use third-party services that support OpenID like Facebook or Google.

Why use SLAS?

Looking at the above, you might think: “But can’t I already do these things with SFRA and SiteGenesis?”

In a way, you’re right. These login types are already supported in the current system. However, they can’t be used across other applications, such as Endless Aisle, kiosks, or mobile apps, without additional development. You will need to create custom solutions for each case.

SLAS is a headless API that can be used by all your channels, whether they are Commerce Cloud or not.

Longer log-in time

People familiar with Salesforce B2C Commerce Cloud know that the storefront logs you out after 30 minutes of inactivity. Many projects have requested a longer session, especially during checkout, as this can be particularly frustrating. 

Previously, extending this timeout wasn’t possible. Now, with SLAS, you can increase it up to 90 days! Yes, you read correctly—a significant three-month extension compared to previous options!

The Old Guard: A Necessary Evil Called plugin_slas

To understand where we’re going, we have to respect where we’ve been. When Salesforce B2C Commerce Cloud began its push into the headless and composable world with the PWA Kit, a significant architectural gap emerged. 

The traditional monoliths, Storefront Reference Architecture (SFRA) and SiteGenesis, managed user sessions using a dwsid cookie. The new headless paradigm, however, operates on a completely different authentication mechanism: the Shopper Login and API Access Service (SLAS), which utilises JSON Web Tokens (JWTs).

For any business looking to adopt a hybrid model—keeping parts of their site on SFRA while building new experiences with the PWA Kit—this created a jarring disconnect. How could a shopper’s session possibly persist across these two disparate worlds?

The Problem It Solved: A Bridge Over Troubled Waters

Salesforce’s answer, released in February 2022, was the plugin_slas cartridge. It was designed as a plug-and-play solution for SFRA that intercepted the standard login process. Instead of relying on the traditional dw.system.Session script API calls for authentication, the cartridge rerouted these flows through SLAS. This clever maneuver effectively “bridged” the two authentication systems, allowing a shopper to navigate from a PWA Kit page to an SFRA checkout page without losing their session or their basket.  

For its time, the cartridge was a critical enabler. It unlocked the possibility of hybrid deployments and introduced powerful SLAS features to the monolithic SFRA world, such as integration with third-party Identity Providers (IDPs) like Google and Facebook, as well as the much-requested ability to extend shopper login times from a paltry 30 minutes to a substantial 90 days.

The Scars It Left: The True Cost of the Cartridge

While the plugin_slas cartridge solved an immediate and pressing problem, it came at a significant technical cost. Developers on the front lines quickly discovered the operational friction and performance penalties baked into its design.

  • The Performance Tax: The cartridge introduced three to four remote API calls during login and registration. These weren’t mere internal functions; they involved network-heavy SCAPI and OCAPI calls used for session bridging. This design resulted in noticeable latency during the crucial authentication phase. Every login, registration, and session refresh experienced this delay, impacting user experience.

  • The API Quota Black Hole: This was perhaps the most challenging issue for development teams, especially when the quota limit was still 8 – this is now 16, luckily. B2C Commerce enforces strict API quotas that cap the number of API calls per storefront request. The plugin_slas cartridge could consume four, and in some registration cases, even five API calls just to log in a user.

    Using nearly half of the API limit for authentication alone was a risky strategy. This heavily restricted other vital operations, such as retrieving product information, checking inventory, or applying promotions, all within the same request. It led to constant stress and compelled developers to create complex, fragile workarounds.

  • The Maintenance Quagmire: As a cartridge, plugin_slas was yet another piece of critical code that teams had to install, configure, update, and regression test. When Salesforce released bug fixes or security patches for the cartridge, it required a full deployment cycle to get them into production. This added operational overhead and introduced another potential point of failure in the authentication path, a path that demands maximum stability and security. The cartridge was a tactical patch on a strategic problem, and its very architecture—an external add-on making remote calls back to the platform—was the root cause of its limitations.

The New Sheriff in Town: Platform-Native Hybrid Authentication

The transition to the future of authentication, as the classic "plugin_slas cartridge" passes the key to the newest "Hybrid Authentication."

Recognising the limitations of the cartridge-based approach, Salesforce went back to the drawing board and engineered a proper, strategic solution. Released with B2C Commerce version 25.3, Hybrid Authentication is not merely an update; it is a fundamental architectural evolution.

What is Hybrid Auth, Really? It's Not Just a Cartridge-ectomy

Hybrid Authentication is best understood as a platform-level session synchronisation engine. It completely replaces the plugin_slas cartridge by moving the entire logic of keeping the SFRA/SiteGenesis dwsid and the headless SLAS JWT in sync directly into the core B2C Commerce platform.

This isn’t a patch or a workaround; it’s a native feature. The complex dance of bridging sessions is no longer the responsibility of a fragile, API-hungry cartridge but is now handled automatically and efficiently by the platform itself.

The Promised Land: Core Benefits of Going Native

For developers and architects, migrating to Hybrid Auth translates into tangible, immediate benefits that directly address the pain points of the past.

  • Platform-Native Data Synchronisation: The session bridging process is now an intrinsic part of the platform’s authentication flow. This means no more writing, debugging, or maintaining custom session bridging code. It simply works out of the box, managed and maintained by Salesforce.

  • A Seamless Shopper Experience: By eliminating the clunky, multi-call process of the old cartridge, the platform ensures that session state is synchronised far more reliably and with significantly less latency. The nightmare scenario of a shopper losing their session or basket when moving between a PWA Kit page and an SFRA page is effectively neutralised. This seamlessness extends beyond just the session, automatically synchronising Shopper Context data and “Do Not Track” (DNT) preferences between the two environments.

  • Full Support for All Templates: Hybrid Authentication is a first-class citizen for both SFRA and, crucially, the older SiteGenesis architecture. This provides a fully supported, productized, and stable path toward a composable future for all B2C Commerce customers, regardless of their current storefront template.

Is The Promised Land Free of Danger?

As with any new feature or solution, early adoption often means less community support initially, and you may encounter unique issues as one of the first partners or customers.

Therefore, it’s essential to review all available documentation and thoroughly test various scenarios in testing environments, such as a sandbox or development environment, before deploying to production.

Hardening Your Security Posture for 2025 and Beyond

The security landscape for web authentication is constantly evolving. The migration to Hybrid Auth presents a perfect opportunity to not only simplify your architecture but also to modernise your security posture and ensure compliance with the latest standards.

The 90-Day Session: A Convenience or a Liability?

While this extended duration is highly convenient for users on trusted personal devices, such as mobile apps, it remains a significant security liability on shared or public computers. If a user authenticates on a library computer, their account and personal data could be exposed for up to three months. 

The power to configure this timeout lies within your SLAS client’s token policy. It is strongly recommended that development, security, and legal teams collaborate to define a session duration that strikes an appropriate balance between user convenience and risk. For most web-based storefronts, a much shorter duration, such as 1 to 7 days, is a more prudent and secure choice.

Modern SLAS Security Mandates You Can't Ignore

Since the plugin_slas cartridge was first introduced, Salesforce has rolled out several security enhancements that are now effectively mandatory. Failing to address them during your migration will result in a broken or insecure implementation.

  • Enforcing Refresh Token Rotation: This is a major change, aligning with the OAuth 2.1 security specification. For public clients, which include most PWA Kit storefronts, SLAS now prohibits the reuse of a refresh token. When an application uses a refresh token to get a new access token, the response will contain a new refresh token. The application must store and use this new refresh token for subsequent refreshes. Attempting to reuse an old refresh token will result in a 400 'Invalid Refresh Token' error. The plugin_slas cartridge had to be updated to version 7.4.1 to support this, and any custom headless frontend must be updated to handle this rotation logic.  

  • Stricter Realm Validation: To enhance security and prevent misconfiguration, SCAPI requests now undergo stricter validation to ensure the realm ID in the request matches the assigned short code for that realm. A mismatch will result in a 404 Not Found error.

  • Choosing the Right Client: Public vs. Private: The fundamental rule of OAuth 2.0 remains paramount. If your application cannot guarantee the confidentiality of a client secret (e.g., a client-side single-page application or a native mobile app), you must use a public client. If the secret can be securely stored on a server (e.g., in a traditional web app or a Backend-for-Frontend architecture), you should use a private client.
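The rotation rule above is easy to get wrong in a custom headless frontend. Here is a minimal, framework-free sketch (all names are hypothetical, and the SLAS token endpoint is simulated rather than called over HTTP) of the one habit that matters: persist the new refresh token from every response, because the old one is dead on arrival.

```javascript
// A sketch of refresh token rotation handling (names are hypothetical,
// and the token endpoint is simulated rather than called over HTTP).
function createTokenStore() {
    let tokens = { accessToken: null, refreshToken: null };
    return {
        get: () => ({ ...tokens }),
        // Persist BOTH tokens from every token response: after rotation,
        // the previously stored refresh token is no longer valid.
        applyTokenResponse(response) {
            tokens = {
                accessToken: response.access_token,
                refreshToken: response.refresh_token
            };
        }
    };
}

// Stand-in for the token endpoint: each refresh token is single-use,
// and presenting a stale one yields a 400-style error.
function simulateTokenEndpoint(presentedRefreshToken, serverState) {
    if (presentedRefreshToken !== serverState.value) {
        return { status: 400, error: 'invalid_refresh_token' };
    }
    serverState.value = 'refresh-' + (++serverState.counter);
    return {
        status: 200,
        access_token: 'access-' + serverState.counter,
        refresh_token: serverState.value // the rotated token
    };
}
```

The whole trick lives in applyTokenResponse: never assume the refresh token you already hold survives a refresh.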

Because the migration to Hybrid Auth requires touching authentication code on both the SFCC backend and the headless frontend, it is the ideal and necessary time to conduct a full security audit. The migration project’s scope must include updating your implementation to meet these new, stricter standards.

Conclusion: Be the Rhino, Not the Dodo

Migrating from the plugin_slas cartridge to native Hybrid Authentication is not just a simple version bump or a minor refactor; it is a strategic architectural upgrade. It’s an opportunity to pay down significant technical debt, reclaim lost performance, eliminate API quota anxiety, and dramatically simplify your hybrid architecture. 

This shift is a clear signal of Salesforce’s commitment to making the composable and hybrid developer experience more robust, stable, and platform-native. By embracing foundational platform features, such as Hybrid Authentication, over temporary, bolt-on cartridges, you are actively future-proofing your implementation and aligning with the platform’s strategic direction. Don’t let your hybrid architecture become a relic held together by legacy code. 

Be the rhino: charge head-first through the complexity and build on the stronger foundation the platform now provides.

The post Session Sync Showdown: From plugin_slas to Native Hybrid Auth in SFRA and SiteGenesis appeared first on The Rhino Inquisitor.

]]>
https://www.rhino-inquisitor.com/slas-in-sfra-or-sitegenesis/feed/ 1
The Ultimate SFCC Guide to Finding Your POD Number https://www.rhino-inquisitor.com/the-sfcc-guide-to-finding-pod-numbers/ Mon, 21 Jul 2025 05:05:51 +0000 https://www.rhino-inquisitor.com/?p=13311 Knowing your POD number isn't just trivia; it's a critical piece of operational intelligence. It's the key to configuring firewalls, anticipating maintenance and troubleshooting effectively.

The post The Ultimate SFCC Guide to Finding Your POD Number appeared first on The Rhino Inquisitor.

]]>

As a Salesforce B2C Commerce Cloud developer, you operate within a sophisticated, multi-tenant cloud architecture. While Salesforce masterfully handles the underlying infrastructure, there are times when you need to peek behind the curtain. One of the most common—and often surprisingly elusive—pieces of information you’ll need is your instance’s POD number.

Knowing your POD number isn’t just trivia; it’s a critical piece of operational intelligence. It’s the key to configuring firewalls, anticipating maintenance, troubleshooting effectively, and optimising performance. This guide is your definitive resource for uncovering that number. We’ll explore every method available, from clever UI tricks to official support channels, so you can master your environment.

What is an SFCC POD, and Why Should You Care?

Before we dive into the “how,” let’s establish the “what” and “why.” In the Salesforce B2C Commerce ecosystem, a POD (Point of Delivery) is not just a single server. It is a complete, self-contained infrastructure cluster hosting the multi-tenant Software as a Service (SaaS) application. Think of it as a group of hardware—including firewalls, load balancers, application servers, and storage systems—that multiple customers share. Salesforce manages this grid, continually adding new PODs and refurbishing existing ones to balance loads, enhance performance, and improve disaster recovery capabilities.

This SaaS model is a significant advantage, enabling your team to focus on building exceptional storefronts instead of managing hardware. 

Salesforce also frequently performs “POD moves,” migrating entire customer realms to new hardware to ensure performance and reliability. By treating the POD as a transient, infrastructure-level detail rather than a permanent, customer-facing setting, Salesforce maintains the flexibility to manage the grid without requiring constant configuration changes on your end.

This means that for developers, finding the POD number is an act of reconnaissance. We must learn how to query the system’s current state. Here’s why this knowledge is indispensable:

  • Firewall & Integration Configuration: This is the most frequent reason you’ll need your POD number. When setting up integrations with third-party systems, such as payment gateways, Order Management Systems (OMS), or tax providers, their security policies often require you to allowlist the outbound IP addresses from your SFCC instances. These IP addresses are specific to the POD on which your realm resides. For a seamless transition during a potential POD move, it is best practice to allowlist the IPs for both your current POD and its designated Disaster Recovery (DR) POD at all times. (We’ll explain where to find those later)

  • Understanding Maintenance Schedules: Salesforce announces maintenance windows and incidents on its Trust site on a per-POD basis. Knowing your POD number is the only way to accurately anticipate downtime for your Primary Instance Group (PIG), allowing you to plan releases and testing cycles effectively.

  • Troubleshooting & Support: When diagnosing elusive connectivity issues, performance degradation, or other strange behaviour, knowing the POD is a crucial data point. It’s one of the first things you should check, and it’s vital information to include when opening a support case with Salesforce to expedite a resolution.

  • Performance Optimisation: In the modern era of composable storefronts, performance is paramount. For sites built with the PWA Kit and Managed Runtime, deploying your Progressive Web App (PWA) to a Managed Runtime region that is geographically close to your data’s POD is critical for minimising latency and delivering the fast page loads that customers expect.

The Shift to Hyperforce: What It Means for PODs

Salesforce is fundamentally changing its infrastructure by migrating B2C Commerce Cloud to Hyperforce, its next-generation platform built on public cloud technology. This strategic move away from traditional Salesforce-managed data centres allows for greater scalability, enhanced security, and improved performance by leveraging the global reach of public cloud providers. For anyone working with SFCC, understanding this transition is crucial, as it marks a significant evolution in how the platform is architected and managed. The core takeaway is that the classic concept of a static, identifiable POD is becoming a thing of the past for realms on Hyperforce.

With the adoption of Hyperforce, the architecture is far more dynamic. Your SFCC instance is no longer tied to a single, fixed data centre or a specific POD number that can be easily identified through a URL or IP address lookup. This means that many of the clever methods currently used to pinpoint your POD will no longer be reliable once your realm is migrated.

Instead of a predictable POD, your instance operates within a more fluid public cloud environment.

The UI Sleuth: Finding Your POD with a Few Clicks

For those times when you need a quick answer, these browser-based methods are your best friends.

Method 1: The Custom Maintenance Page Trick

This is a clever, indirect method that leverages the way Business Manager generates preview links. It’s highly reliable for determining the POD of your PIG instances (Development, Staging, Production).

    1. Log in to the Business Manager of the instance you want to investigate.

    2. Navigate to Administration > Site Development > Custom Maintenance Pages.

    3. In the Preview section, you will see links for your various storefronts. If you don’t have a maintenance page uploaded, you must upload one first. You can download a template from this same page and create a simple .zip file to enable the preview links.

    4. Locate the (Production) link.

    5. Do not click the link. Instead, hover your mouse cursor over it.

    6. Look at your browser’s status bar (usually in the bottom-left corner). It will display the destination URL, and within that URL, you will find the POD number.

      For example, the URL might look something like https://pod185.production.demandware.net/..., clearly indicating you are on POD 185.
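If you ever want to automate this check across environments, the POD number can be pulled straight out of such a preview URL. A small, hypothetical helper:

```javascript
// Extract the POD number from a hostname like pod185.production.demandware.net.
// Illustrative only: the URL shape is based on the example above.
function extractPodNumber(url) {
    const match = /^https?:\/\/pod(\d+)\./.exec(url);
    return match ? Number(match[1]) : null; // null when no POD prefix is present
}
```

For the example URL above, this returns 185.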

Method 2: The (lightning) PIG instance footer

By far the easiest and quickest option to explain.

Go to your staging, development, or production instance, log in, and finally look at the bottom right of any page to see the POD number in the footer!

The Account Manager Prerequisite

While you cannot find the POD number directly in Account Manager, it is the source for prerequisite information you will need for other methods, particularly when contacting support. Users with the Account Administrator role are the only ones who can access this information. 

To find your Realm and Organization IDs:

  1. Log in to Account Manager at https://account.demandware.com.

  2. Navigate to the Organization tab.

  3. Open your organization, and in the Assigned Realms section you can find your 4-letter Group ID and the alphanumeric Realm ID.

Keep this information handy. It’s essential for identifying your environment when interacting with Salesforce systems and support teams.

Method 3: The Legacy Log Center URL (A History Lesson)

This method is now largely historical (migrated in 2023), but it remains important for context, especially if you work on older projects or encounter references to it in internal documentation.

Before the 2023 migration to a centralised logging platform, each POD had a dedicated Log Center application. The URL format explicitly included the POD number:

https://logcenter-<POD-No.><Cylinder>-hippo.demandware.net/logcenter

The <Cylinder> value was also significant: 00 for a SIG (your sandboxes) and 01 for a PIG (Dev, Staging, Prod). 
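Purely as a history lesson, the retired scheme is easy to reconstruct in a couple of lines (a sketch of the old URL format described above, not something you can use today):

```javascript
// Rebuild the legacy (pre-2023) Log Center URL.
// Cylinder '00' = SIG (sandboxes), '01' = PIG (Dev/Staging/Prod).
function legacyLogCenterUrl(podNumber, isPig) {
    const cylinder = isPig ? '01' : '00';
    return 'https://logcenter-' + podNumber + cylinder + '-hippo.demandware.net/logcenter';
}
```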

The platform’s evolution toward a more abstracted, public cloud infrastructure is evident in this instance. The old Log Center URL was tied directly to a specific hardware group (hippo.demandware.net), reflecting a more rigid infrastructure. 

The new, centralised Log Center decouples logging from the specific POD where an instance runs, using regional endpoints instead (e.g., AMER, EU, APAC). This shift is a classic pattern in modern cloud services, favouring centralised, scalable functions over hardware-specific endpoints.

Although this legacy URL is no longer a reliable method for active discovery, understanding its history offers insight into the platform’s architectural evolution.  

The Official Channels: Guaranteed but Less Immediate

On the right path: Getting information from the official source.

When you need an officially sanctioned answer or want to monitor the health of your environment, these are the channels to use.

Method 4: Consulting Salesforce Support (The Ultimate Fallback)

This is your most authoritative source. Salesforce Support can provide all realm information, including the current POD number. This is the best route to take when other methods are inconclusive or when you need an official record for compliance or audit purposes. To make the process efficient, open a support case and provide your Organization ID and Realm ID from the outset. 

Support will also be the primary source of information during a planned POD move.

Using the Salesforce Trust Site (For Monitoring, Not Discovery)

A common misconception is that the Salesforce Trust site can be used to find your POD (Point of Delivery) number. 

This is incorrect. 

The Trust site is where you go to check the status of a POD you already know. Once you’ve identified your POD number using one of the methods above, you can visit https://status.salesforce.com/products/B2C_Commerce_Cloud, find your POD in the list, and subscribe to notifications for maintenance and incidents.

The Official POD Lists

Salesforce maintains official knowledge base articles that list all PODs, their general locations (e.g., USA East – VA, Japan, …), their DR (Disaster Recovery) POD counterparts, and their outgoing IP addresses. These are invaluable reference documents.

You should use these lists in conjunction with the other discovery methods. For example, once the maintenance page URL indicates that you are on POD 126, you can consult the AMER list to find that its location is Virginia, its DR POD is 127, and its primary outbound IP address is 136.146.57.33.
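If you maintain more than a handful of realms, it can help to keep a tiny lookup of the values from those official lists. A sketch using only the example values quoted above (verify every entry against the current Salesforce knowledge base articles before relying on it):

```javascript
// Hypothetical lookup table seeded with the POD 126 example from the text.
// Real values must come from the official Salesforce POD lists.
const podInfo = {
    126: { location: 'USA East - VA', drPod: 127, outboundIp: '136.146.57.33' }
};

// Best practice per the firewall note earlier: allowlist the IPs of both
// the current POD and its DR POD counterpart.
function podsToAllowlist(pod) {
    const info = podInfo[pod];
    return info ? [pod, info.drPod] : [];
}
```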

Mastering Your Environment

Knowing how to find your POD number is more than a technical trick. It’s a sign of a developer who understands the platform on a deeper level. It empowers you to configure integrations with confidence, anticipate operational changes, and troubleshoot with precision.

The post The Ultimate SFCC Guide to Finding Your POD Number appeared first on The Rhino Inquisitor.

]]>
Image-ine: Salesforce B2C Commerce Cloud DIS for Developers https://www.rhino-inquisitor.com/image-ine-sfcc-dis-for-developers/ Mon, 14 Jul 2025 06:44:24 +0000 https://www.rhino-inquisitor.com/?p=13242 In the wild, wild west of e-commerce, images aren't just pretty pictures. They're your silent sales force, your conversion catalysts, and your SEO superheroes. Shoddy, slow-loading visuals? That's a one-way ticket to "bounce rate hell" and a brand image that screams "we tried." But fear not, intrepid developer! Salesforce B2C Commerce Cloud's Dynamic Image Service (DIS) is here to rescue your storefront from visual mediocrity and transform it into a high-octane, pixel-perfect masterpiece.

The post Image-ine: Salesforce B2C Commerce Cloud DIS for Developers appeared first on The Rhino Inquisitor.

]]>

In the wild, wild west of e-commerce, images aren’t just pretty pictures. They’re your silent “sales force” (☺), your conversion catalysts, and your SEO superheroes. Shoddy, slow-loading visuals? That’s a one-way ticket to “bounce rate hell” and a brand image that screams, “We tried.”

But fear not. Salesforce B2C Commerce Cloud’s Dynamic Image Service (DIS) is here to help. Keep in mind that this built-in tool has several tricks up its sleeve, but might not always be the best fit for your project, so keep reading! 

So, what exactly is DIS magic?

Imagine a world where you upload one glorious, high-resolution image, and then, poof!—it magically transforms into every size, shape, and format your storefront could ever dream of, all on the fly.
That, my friends, is the core enchantment of Salesforce B2C Commerce Cloud’s Dynamic Imaging Service (DIS). It’s designed to eliminate the nightmare of manually resizing, cropping, and uploading numerous image variants for every product view.

Instead of a digital assembly line of pre-processed images, DIS acts like a master chef. You provide it with the finest ingredients (your single, high-res source image), and when a customer’s browser requests a specific dish—say, a tiny thumbnail for a search result or a sprawling, detailed shot for a product page—DIS delivers it instantly. No waiting, no fuss – just the right-sized image, served hot and fresh. 

And you, the developer, are the culinary artist! DIS hands you a robust toolkit of transformation parameters, giving you pixel-level control. Want to resize? scaleWidth or scaleHeight are your pals. Need to snip out a specific detail? cropX, cropY, cropWidth, and cropHeight are your precision scissors (remember, you need all four for the magic to happen!). Fancy a different file type? format lets you switch between gif, jp2, jpg, jpeg, jxr, and png from a smorgasbord of source formats, including tif and tiff.

Ever wanted to add a “SALE!” image badge to an image without using Photoshop? imageX, imageY, and imageURI are your go-to options for the overlay. Though honestly, why not just use CSS for this, right?

And for that perfect balance between crispness and speed, quality lets you fine-tune compression for JPGs (1-100, default 80) and PNGs. Even pesky transparent backgrounds can be tamed with bgcolor, and metadata stripped with strip.

Want to know precisely how all of these things work? Have a look at the official documentation.
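To give a feel for what those parameters look like on the wire, here is an illustrative query-string builder. Only sw (scaled width) and q (quality) are taken from usage shown in this post; on the platform itself you should let URLUtils.imageURL() or MediaFile.getImageURL() generate these URLs for you rather than assembling them by hand.

```javascript
// Illustrative sketch only: maps a couple of transform options onto the
// sw/q query parameters seen in DIS URLs. Any other parameter mapping
// should be checked against the official documentation.
function disQueryUrl(baseLink, transform) {
    const params = new URLSearchParams();
    if (transform.scaleWidth != null) params.set('sw', String(transform.scaleWidth));
    if (transform.quality != null) params.set('q', String(transform.quality));
    const qs = params.toString();
    return qs ? `${baseLink}?${qs}` : baseLink;
}
```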

Why You Should Be Best Friends with DIS

Best Friends with DIS: Seamless Image Optimization

For developers navigating the Salesforce B2C Commerce Cloud universe, DIS isn’t just a nice-to-have; it’s a game-changer that simplifies your life and turbocharges your storefront.

Kiss Manual Image Management Goodbye: Seriously, who has time to create 10 different versions of the same product shot? With DIS, you upload one glorious, high-resolution image to Commerce Cloud, and DIS handles the rest, generating every size and format on demand. This means your creative and merchandising teams can focus on crafting stunning visuals, not on tedious, repetitive image grunt work. More creativity, less clicking!  

Speed Demon & Responsive Rockstar: In the e-commerce race, speed wins. DIS helps you cross the finish line first by serving up images that are just right for every scenario. No oversized behemoths slow down your product pages, and no pixelated thumbnails ruin your search results. This precision means faster page loads, which directly translates into happier customers, improved SEO, and ultimately, more conversions. Plus, DIS is your built-in responsive design partner, ensuring your storefront looks sharp and loads lightning-fast on any device, from desktops to smartphones. As I’ve discussed in my blog post, From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed, performance is paramount. 

Flexibility That’ll Make You Giddy: Ever had a designer suddenly decide to change the entire product grid layout? From four items at 150×150 pixels to three at 250×250? Without DIS, that’s a full-blown panic attack. With DIS? You tweak a few parameters in your templates, and bam!—new layout, perfectly sized images, no re-processing, no re-uploading, no re-assigning. Do you need a new promotional banner with a custom image size for a flash sale? Generate it instantly! (Ok…Ok, I might be a bit too optimistic here, some foresight and extra editor fields in Page Designer are needed for use-cases like this.)

This adaptability is pure gold. And here’s the cherry on top: by using the official Script API for URL generation, your image URLs are future-proofed. Salesforce can change its internal plumbing all it wants; your code remains rock-solid, reducing technical debt and maintenance headaches. 

				
URLUtils.imageURL('/<static image path>/image.png', { scaleWidth: 100, format: 'jpg' });

Caching Like a Boss (and CDN’s Best Friend): DIS isn’t just dynamic; it’s smart. It caches (limited) transformations to deliver images at warp speed. If your Commerce Cloud instance is hooked up to a Content Delivery Network (newsflash: it is -> the eCDN), the CDN helps optimise caching as well (through TTL headers). 

When you update an image, there’s no need for manual cache invalidation thanks to a technique known as URL fingerprinting/asset fingerprinting. Instead of just replacing the old file, the platform creates a new URL for the updated image, often by adding a unique identifier (a “fingerprint”). Because the URL has changed, it forces browsers and the eCDN to download the new version as if it were a completely new file, bypassing the old cached version.

				
/dw/image/v2/BCQR_PRD/on/demandware.static/-/Sites-master/default/dw515e574c/4.jpg

Do you notice that dw515e574c? It represents the unique “cache” ID managed by SFCC to ensure cached images are served. When the image updates, a new ID is generated so the customer always sees the latest version!
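For debugging, you can spot that fingerprint segment with a simple pattern. This is an observation about the example URL above (dw followed by eight hex characters), not a documented contract, so treat it as a heuristic:

```javascript
// Heuristic: pull the dwXXXXXXXX fingerprint segment out of a
// demandware.static URL path. Returns null when no such segment exists.
function extractFingerprint(url) {
    const match = /\/(dw[0-9a-f]{8})\//.exec(url);
    return match ? match[1] : null;
}
```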

DIS Tips, Tricks, and How to Avoid Digital Disasters

To truly master DIS and avoid any “why isn’t this working?!” moments, keep these developer commandments in mind.

Embrace the Script API (Seriously, Do It!)

We can’t stress this enough: use the URLUtils and MediaFile Script API classes for generating your DIS URLs. 

It’s the official, validated, and future-proof way to do it. Here’s a little snippet to get you started:   

 
				
var ProductMgr = require('dw/catalog/ProductMgr');

// Obtain your product object, e.g. by ID.
var product = ProductMgr.getProduct('your-product-id');
var thumbnailImage = product ? product.getImage('thumbnail', 0) : null;

if (thumbnailImage) {
    var imageUrl = thumbnailImage.getImageURL({
        scaleWidth: 100,
        format: 'jpg',
        quality: 85
    });
    // The 'imageUrl' variable now holds the dynamically generated URL
}

Know Your Image Limits (and How to Work Around Them)

Even superheroes have weaknesses. DIS has a few, and knowing them is half the battle:

  • Source Image Quality: Always upload the largest, most beautiful, and highest-quality images you have. DIS is a master at shrinking and optimising, but it can’t create pixels out of thin air (it’s not an AI solution)!  

  • Size Matters (A Lot): This is a big one. Images over 6MB in file size or larger than 3000×3000 pixels? DIS will politely decline to transform them and serve them up in their original, unoptimized glory. The first time you request an oversized image, you may encounter an error; however, subsequent requests typically proceed without issue. The takeaway? Keep your source images just under these limits (think 5.9MB or 2999×2999 pixels) to ensure DIS always works its magic.

    Note: One source states a 10MB limit in the documentation, but to be cautious, always follow the 6MB limit.

  • Transformation Timeout: DIS has a 29-second deadline. If a transformation is super complex (especially on animated GIFs, where every frame needs processing), it might time out, giving you a dreaded 408 error. If you hit this, simplify your transformations or pre-process those extra-fancy assets. 

  • Cropping’s Four Musketeers: If you’re cropping, remember cropX, cropY, cropWidth, and cropHeight are a package deal. All four must be present, or no cropping happens!   
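Those limits are easy to codify into a pre-flight check for your asset pipeline. A sketch based solely on the numbers above; the helper and its field names are hypothetical:

```javascript
// Pre-flight check mirroring the limits quoted above: stay at or under
// 6MB and 3000x3000 pixels, and supply all four crop values or none.
const DIS_MAX_BYTES = 6 * 1024 * 1024;
const DIS_MAX_DIMENSION = 3000;

function checkDisSource(image, transform = {}) {
    const problems = [];
    if (image.bytes > DIS_MAX_BYTES) {
        problems.push('file larger than 6MB: DIS will serve the original unoptimised');
    }
    if (image.width > DIS_MAX_DIMENSION || image.height > DIS_MAX_DIMENSION) {
        problems.push('dimensions beyond 3000x3000: DIS will serve the original unoptimised');
    }
    const cropKeys = ['cropX', 'cropY', 'cropWidth', 'cropHeight'];
    const present = cropKeys.filter((k) => transform[k] !== undefined).length;
    if (present > 0 && present < 4) {
        problems.push('incomplete crop: all four crop parameters are required');
    }
    return problems; // empty array means the source is DIS-friendly
}
```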

Transform DIS PNG to JPG

When it comes to image formats, transforming PNG files to JPEG using the SFCC Dynamic Image Service can be a game-changer, especially when you don’t need those transparent backgrounds. This simple trick alone can significantly reduce file sizes, leading to faster page loads and a smoother user experience.

Here’s how you might implement this in a controller:

				
'use strict';

var server = require('server');
var ProductMgr = require('dw/catalog/ProductMgr');
var URLUtils = require('dw/web/URLUtils');

/**
 * @name Product-ImageExample
 * @function
 * @memberof Product
 * @description A controller endpoint that demonstrates the correct way to generate
 * a transformed image URL for a given product.
 */
server.get('ProductImageExample', function (req, res, next) {
    // 1. Retrieve the product object using the Product Manager.
    // The product ID should be passed as a query string parameter, e.g., ?pid=12345
    var product = ProductMgr.getProduct(req.querystring.pid);
    var imageUrl = ''; // Initialize a default empty URL.

    // 2. Check if the product and its image exist before proceeding.
    if (product) {
        // 3. Get the MediaFile object for the 'large' view type.
        var productImage = product.getImage('large', 0);

        if (productImage) {
            // 4. Generate the transformed URL using getImageURL() on the MediaFile object.
            // Here, we convert a PNG to a JPG and specify a white background.
            imageUrl = productImage.getImageURL({
                'format': 'jpg',
                'bgcolor': 'ffffff' // Use a 6-digit hex code for the color.
            }).toString(); // Convert the URL object to a string for the template.
        }
    }
    
    // 5. Render a template, passing the generated URL to be used in an <img> tag.
    res.render('product/productimage', {
        productImageURL: imageUrl
    });

    // It is standard practice to call next() at the end of a middleware function.
    next();
});

// Export the controller module.
module.exports = server.exports();
				
			

General Image Zen for Speed and Quality

DIS is powerful, but don’t forget the fundamentals of image optimisation:

  • Responsive Images (srcset & sizes): These attributes are your best friends for letting browsers pick the perfect image resolution for a user’s device and viewport. Less data, faster loads! 

  • Prevent Layout Jumps (CLS): Always specify the width and height attributes for your images. This reserves space, preventing annoying layout shifts that make your site feel janky and hurt your Core Web Vitals.   

  • Pre-Compress (Gently): While DIS handles quality, a little pre-compression on your source images (especially removing unnecessary metadata) can reduce file size by up to 30% without compromising visual quality. 

  • Leverage the CDN: DIS already plays nicely with Salesforce’s Content Delivery Network. This means your images are cached and delivered from servers closer to your global audience, making them appear almost instantly.  
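Since DIS generates the width variants for you, building a srcset is just string work. A minimal sketch, assuming only the sw parameter used elsewhere in this post:

```javascript
// Build a srcset string from a list of DIS width variants so the
// browser can pick the closest match for the viewport.
function buildSrcSet(baseLink, widths) {
    return widths.map((w) => `${baseLink}?sw=${w} ${w}w`).join(', ');
}
```

Pair the result with a sizes attribute so the browser knows how wide the image will actually render.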

Troubleshooting: When Things Go Sideways

  • “My image isn’t transforming!” First suspect: file size or dimensions. Check those 6MB and 3000×3000 pixel limits. 

  • “408 Timeout Error!” If you’re seeing this, especially with animated GIFs or huge images that undergo numerous transformations, you’re approaching the 29-second limit. Simplify or pre-process.  

  • General Sluggishness: Remember, images are just one piece of the performance puzzle. If your storefront is still slow, look for other potential culprits, such as poorly optimised custom code, complex reports, or inefficient API calls. Regular code audits are your friend!  

When Not to Use It (Or When to Be Extra Careful)

Image Overload: When Your Service Gets Jammed

While DIS is a superhero, even superheroes have their kryptonite. There are a few scenarios where DIS might not be your go-to, or where you need to tread with extra caution:

  • When Your Images Are Absolute Giants: Remember those 6MB file sizes and 3000×3000 pixel dimension limits? If your source images consistently blow past these thresholds, DIS won’t transform them. Instead, they’ll be served in their original, unoptimized glory. This results in slower load times and a subpar user experience, particularly on mobile devices. For truly massive, high-fidelity assets (think ultra-high-res hero banners or interactive 360-degree product views that require large file sizes), you may need to consider specialised external image services or alternative hosting solutions that can handle and optimise such large files, or simply serve the original if the performance impact is minimal.

  • For Super Complex, “Expensive” Transformations: DIS has a 29-second timeout for transformations. If you’re trying to perform multiple, intricate operations on a very large image, or especially on animated GIFs (where every single frame needs processing), you may encounter this wall and receive a 408 timeout error. If your use case demands such complex, real-time transformations, you might need to pre-process these assets offline or explore dedicated, more powerful image processing platforms designed for extreme computational demands.

  • When Images Aren’t Hosted on Commerce Cloud Digital: DIS only works its magic on images that are stored within your Commerce Cloud Digital environment. If your images are hosted externally (e.g., on a third-party DAM or a different CDN not integrated with Commerce Cloud’s asset management), DIS won’t be able to touch them. In such cases, you’d rely on the capabilities of your external hosting solution for image optimisation.

  • For Very Simple, Static Images with No Transformation Needs: If you have a tiny, static icon or a simple logo that never changes size, format, or quality, and you don’t anticipate any future dynamic needs, the overhead of routing it through DIS might be overkill. While DIS is designed for flexibility, for truly unchanging, small, and already optimised assets, direct static hosting might be marginally simpler, though the benefits of DIS’s caching and CDN integration often outweigh this. However, given the “future-proofing” aspect, it’s generally still a good idea to use DIS for consistency.

  • You Need More Modern Features: If you’ve been in the SFCC space for some time, you’ve likely noticed that little has changed regarding image resizing and format support over the years, although formats like WebP are managed by the eCDN. For those seeking the newest formats like AVIF, you’ll need to look elsewhere at this time.

    Note: The WebP transformation is handled by the eCDN, specifically through its image-optimisation configuration known as the Polish options, rather than by the DIS.

Deciding between Salesforce's native DIS and external CDN/DAM solutions often comes down to specific project needs and existing infrastructure.

Is it still useful for PWA Kit projects? (Spoiler: YES, and here's why!)

Absolutely, unequivocally, 100% YES! DIS isn’t just relevant for PWA Kit projects; it’s arguably more crucial. Modern headless storefronts, like those built with PWA Kit, thrive on speed, flexibility, and that buttery-smooth, app-like user experience. 

Dynamic image transformation is practically a prerequisite for achieving that.

Page Designer's Best Friend & Product Image Powerhouse

DIS integrates beautifully with Page Designer within PWA Kit. Page Designer, for the uninitiated, is Business Manager’s visual editor, which allows marketers to build dynamic, responsive pages without writing a single line of code (well, at least once all the components are developed 😇). 

Where do your product images live? In Commerce Cloud, of course! Which means DIS is the star player for serving them up. Page Designer components can then tap into DIS to display product images, content assets, or any other visual element, ensuring they’re perfectly optimised for whatever device your customer is using.   

The DynamicImage Component: Your PWA Kit Sidekick

PWA Kit even has a dedicated DynamicImage component that makes integrating with DIS a breeze. This component is designed to handle image transformations by mapping an array of widths to the correct sizes and srcset attributes, simplifying responsive image strategies directly within your React components.  

				
<DynamicImage
    src={`${heroImage.disBaseLink || heroImage.link}[?sw={width}&q=60]`}
    widths={{
        base: '100vw',
        lg: heroImageMaxWidth
    }}
    imageProps={{
        alt: heroImage.alt,
        loading: loadingStrategy
    }}
/>

The post Image-ine: Salesforce B2C Commerce Cloud DIS for Developers appeared first on The Rhino Inquisitor.

]]>
From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed https://www.rhino-inquisitor.com/lag-to-riches-a-pwa-kit-developers-guide/ Mon, 23 Jun 2025 17:00:05 +0000 https://www.rhino-inquisitor.com/?p=12990 Let's be honest: a slow e-commerce site is a silent killer of sales. In the world of B2C Commerce, every millisecond is money. As a PWA Kit developer, you're on the front lines of a battle where the prize is customer loyalty and the cost of defeat is a lost shopping cart. Today's shoppers have zero patience for lag. They expect buttery-smooth, app-like experiences, and they'll bounce if you don't deliver.

The post From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed appeared first on The Rhino Inquisitor.

]]>

Truth be told: a slow e-commerce site is a silent killer of sales. In the world of B2C Commerce, every millisecond is money. As a PWA Kit developer, you’re on the front lines of a battle where the prize is customer loyalty and the cost of defeat is a lost shopping cart. Today’s shoppers have zero patience for lag. They expect buttery-smooth, app-like experiences, and they’ll bounce if you don’t deliver.

The numbers don’t lie. A one-second delay can reduce conversions by as much as 7%. But flip that around: a tiny 0.1-second improvement can boost conversion rates by a whopping 8% and keep shoppers from abandoning their carts. When you consider that more than half of mobile users will leave a site that takes over three seconds to load, the mission is crystal clear: speed is everything.

So, how do we win this battle? We need the proper intel and the right weapons. That’s where Google’s Core Web Vitals (CWV) and the Chrome User Experience Report (CrUX) come in. These aren’t just abstract numbers; they’re a direct line into how real people experience your storefront. 

This post is your new playbook. We’re going to break down why every millisecond matters and provide you with an actionable roadmap for taking the Composable Storefront to the next level.

Step 1: Know Your Numbers - Getting Real with CrUX and Core Web Vitals

Before you can optimise anything, you need to understand what you’re measuring. Let’s demystify the data that both your users and Google care about, starting with the difference between what happens in a controlled lab and what happens in the messy real world.

Meet CrUX: Your Real-World Report Card

The Chrome User Experience Report (CrUX) is a massive public dataset from Google, packed with real-world metrics from actual Chrome users. It’s the official source for Google’s Web Vitals program and the ultimate ground truth for how your site performs for your visitors.

This data comes from Chrome users who have opted in to syncing their browsing history and have usage statistic reporting enabled, without a Sync passphrase. For your site to appear in the public dataset, it must be discoverable and have sufficient traffic to ensure that all data is anonymous and statistically significant.

Here are two things you absolutely must know about CrUX:

  1. It’s a 28-Day Rolling Average: CrUX data isn’t live. It’s a trailing 28-day snapshot of user experiences. This means when you push a brilliant performance fix, you won’t see its full impact on your CrUX scores for up to a month. It’s a marathon, not a sprint.
  2. It’s All About the 75th Percentile: To evaluate your site’s performance, CrUX focuses on the 75th percentile. This means that to achieve a “Good” score for a metric, at least 75% of your (hard navigation) pageviews must have an experience that meets the “Good” mark. This focuses on the majority experience while ignoring the wild outliers on terrible connections.
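
To make the 75th-percentile rule concrete, here is a small sketch in plain JavaScript. The sample values and the helper are illustrative only, not part of any Google tooling:

```javascript
// Compute the value at the p-th percentile of a sample set (nearest-rank method),
// the same cut-off idea CrUX uses to grade a metric.
function percentile(values, p) {
    const sorted = [...values].sort((a, b) => a - b);
    const index = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, index)];
}

// Example: hypothetical LCP samples in milliseconds from real pageviews.
const lcpSamples = [1200, 1800, 2100, 2400, 2600, 3100, 4200, 1500];
const p75 = percentile(lcpSamples, 75);
console.log(p75);                              // 2600
console.log(p75 <= 2500 ? 'Good' : 'Needs work'); // "Needs work"
```

Note how a handful of slow pageviews is enough to pull the 75th percentile over the 2.5-second line, even though most samples pass.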

You can also slice and dice CrUX data by dimensions such as device type, providing a powerful lens into your specific audience’s experience.

Lab Coats vs. The Real World: Why Field Data is King

This is one of the most common points of confusion, so let’s clarify it.

Field Data (The “What”) is what we’ve been talking about—data from real users on their own devices and networks. It’s also known as Real User Monitoring (RUM), and CrUX is the largest public source of it. It captures the beautiful chaos of the real world: slow phones, spotty Wi-Fi, and everything in between. It tells you what is happening.

Lab Data (The “Why”) is what you get from a controlled test, like running Google Lighthouse. It simulates a specific device and network to provide you with a performance report. Lab data is your diagnostic tool. It helps you understand why you’re seeing the numbers in your field data.

Here’s the million-dollar takeaway: Google uses field data from CrUX for its page experience ranking signal, NOT your lab-based Lighthouse score.

Google wants to reward sites that are genuinely fast for real people, not just in a perfect lab setting. Your goal isn’t to achieve a 100% score on Lighthouse; your goal is to ensure at least 75% of your real users pass the Core Web Vitals thresholds.

Lighthouse is the tool that helps you get there.

The Big Three: LCP, INP, and CLS Explained

A three-panel cartoon showing a website mascot experiencing performance issues. First, labeled 'Slow LCP', the mascot strains to lift a heavy image. Second, labeled 'High INP', the mascot is frozen in a block of ice, unresponsive to a user's click. Third, labeled 'High CLS', the mascot is knocked over by a falling ad block that displaces a button.
A visual guide to Core Web Vital problems: How poor LCP, INP, and CLS create a frustrating user experience.

Core Web Vitals are the metrics that matter most. They measure three key aspects of user experience: loading, interactivity, and visual stability.

Largest Contentful Paint (LCP): Are We There Yet?

  • What it is: LCP measures how long it takes for the largest image or block of text to appear on the screen. It’s an excellent proxy for when a user feels like the page’s main content has loaded.
  • The Goal: “Good” is 2.5 seconds or less. “Poor” is over 4 seconds.
  • Why it Matters for E-commerce: A slow LCP means your customer is staring at a loading screen instead of your product. This initial frustration is a one-way ticket to a high bounce rate.

Interaction to Next Paint (INP): Did That Click Do Anything?

  • What it is: INP measures how responsive your page is to user input. It observes all clicks, taps, and key presses during a visit and reports a single value (roughly the slowest interaction observed) that represents the page’s overall responsiveness. A high INP is what users call “janky” or “unresponsive.” It replaced First Input Delay (FID) in March 2024 because it’s a much better measure of the entire user journey.
  • The Goal: “Good” is 200 milliseconds or less. “Poor” is over 500ms.
  • Why it Matters for E-commerce: High INP kills conversions. When a user clicks “Add to Cart” and nothing happens instantly, they lose trust and get frustrated. This leads to “rage clicks” and, ultimately, abandoned carts.

Cumulative Layout Shift (CLS): Stop Moving!

  • What it is: CLS measures how much your page’s content unexpectedly jumps around as it loads. It calculates a score based on how much things move and how far they move, without the user doing anything.
  • The Goal: “Good” is a score of 0.1 or less. “Poor” is over 0.25.
  • Why it Matters for E-commerce: Have you ever tried to click a button, only to have an ad load and push it down, making you click the wrong thing? That’s high CLS. It’s infuriating for users and makes your site feel broken and untrustworthy.
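
The three thresholds described above can be collected into one tiny helper. This is a sketch using the published “Good”/“Poor” cut-offs, not an official API:

```javascript
// Published Core Web Vitals thresholds: [good, poor] boundaries per metric.
const THRESHOLDS = {
    lcp: [2500, 4000], // milliseconds
    inp: [200, 500],   // milliseconds
    cls: [0.1, 0.25]   // unitless score
};

// Grade a single metric value against its thresholds.
function grade(metric, value) {
    const [good, poor] = THRESHOLDS[metric];
    if (value <= good) return 'Good';
    if (value <= poor) return 'Needs Improvement';
    return 'Poor';
}

console.log(grade('lcp', 2300)); // "Good"
console.log(grade('inp', 350));  // "Needs Improvement"
console.log(grade('cls', 0.3));  // "Poor"
```

Remember: these grades are applied to the 75th-percentile value from field data, not to a single lab run.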

Step 2: Understand Your Architecture - The PWA Kit's Double-Edged Sword

The Salesforce PWA Kit is engineered for speed, but its modern architecture creates two distinct performance battlegrounds. To win, you need to understand how to fight on both fronts.

The First Impression: Server-Side Rendering (SSR) to the Rescue

A vibrant, two-panel cartoon comparing web rendering methods. The top panel, 'Client-Side Rendering,' shows a stressed user buried in parts from a 'JavaScript Bundle' box. The bottom panel, 'Server-Side Rendering,' shows a happy user cheering as a heroic robot serves them a complete, glowing webpage on a platter.
From frustrating assembly to instant delight: The power of Server-Side Rendering.

When a user first lands on your site, the PWA Kit uses Server-Side Rendering (SSR) to make a great first impression. Here’s the play-by-play:

  1. A user requests a page.
  2. The request hits an Express.js app server running in the Salesforce Managed Runtime (MRT).
  3. On the server, your React components are rendered into a complete HTML document, with all necessary data fetched directly from the Commerce APIs.
  4. This fully baked HTML page is sent directly to the user’s browser.

The huge win here is for your Largest Contentful Paint (LCP). The browser gets a meaningful page instantly, instead of a blank screen and a giant JavaScript file it has to figure out.

The Managed Runtime then takes this to the next level. It has a built-in Content Delivery Network (CDN) that can cache these server-rendered pages. 
If another user requests the same page, the CDN can serve the cached version instantly, completely bypassing the server. A cached SSR response is the fastest you can get, leading to stellar LCP and Time to First Byte (TTFB) scores.

The Main Event: Hydration and Client-Side Interactivity

Once that initial HTML page loads, the magic of hydration happens. The client-side JavaScript bundle downloads, runs, and brings the static HTML to life by attaching all the event handlers and state.

From this moment on, your storefront is a Single-Page Application (SPA). All navigation and UI changes are handled by Client-Side Rendering (CSR). When a user clicks a link, JavaScript takes over, fetches new data, and renders only the parts of the page that need to change, all without a full page reload.

This is the “double-edged sword.” CSR provides that fluid, app-like feel, but it’s also where you’ll find the bottlenecks that hurt your Interaction to Next Paint (INP).

This creates a clear divide: LCP optimisation is a server-side game of efficient rendering and aggressive caching. INP optimisation is a client-side battle against bloated, inefficient JavaScript. 

You can have a fantastic LCP score but still have a terrible user experience due to high INP from clunky client-side code. PWA Kit projects are powerful React apps, and they can get JavaScript-heavy if you’re not careful. And built-in libraries such as Chakra UI don’t make it easy to win this battle.

You need to wear two hats: a backend/DevOps hat for the initial load, and a frontend performance specialist hat for everything after.

The Usual Suspects: Common PWA Kit Performance Bottlenecks

Every PWA Kit developer will eventually face these common performance villains. Here’s your wanted list:

  • Bloated JavaScript Bundles: The Retail React App template is excellent, but if you don’t manage it properly, your JS bundle can become huge. Every new feature adds weight, slowing down hydration and hurting INP.
  • Clumsy Data Fetching: Whether you’re using the old getProps or the new withReactQuery, you can still make mistakes. Fetching data sequentially instead of in parallel, grabbing significantly more data than needed, or re-fetching data on the client that the server has already provided are all common ways to slow down TTFB and LCP.
  • Unruly Third-Party Scripts: These are public enemy #1. Scripts for analytics, ads, A/B testing, and support chats can be performance nightmares. They block the main thread, tank your INP, and can even mess with your service worker caching.
  • Poorly Built Custom Components: A single custom React component that isn’t optimised for performance can significantly impact your INP. This typically occurs through expensive calculations on every render or by triggering a chain reaction of unnecessary re-renders in its children.
  • Messed-Up Caching: The MRT’s CDN is powerful, but it’s not magic. If you don’t set your Cache-Control headers correctly, fail to filter out unnecessary query parameters, or misconfigure your API proxies, you’ll experience a poor cache-hit ratio, and all the benefits of Server-Side Rendering (SSR) will be lost.
A colorful cartoon of a chaotic factory illustrating four web performance bottlenecks. The bottlenecks shown are: a giant truck labeled 'Large Bundle Size' blocking the entrance, many small pipes labeled 'Network Waterfalls' slowly filling a tank, a complex machine for a simple task labeled 'Re-render Storms', and workers slipping on puddles labeled 'Memory Leaks'.
Inside a struggling SPA: A visual guide to common performance bottlenecks.

Step 3: The Performance Playbook - Your Guide to a Faster Storefront

Now that you know the what and the why, let’s get to the how. Here are the specific, actionable plays you can run to build a high-performance storefront.

Master Your Data Fetching

How you fetch data is critical for a fast LCP and a snappy experience.

  • Use withReactQuery for New Projects: If you’re on PWA Kit v3+, withReactQuery is your best friend. It utilises the popular React Query library to make fetching data on both the server and client a breeze. It smartly avoids re-fetching data on the client that the server has already retrieved, which means cleaner code and improved performance.
  • Optimise getProps for Legacy Projects: Stuck on an older project? No problem. Optimise your getProps calls:
    • Be a Minimalist: Return only the exact data your component needs for its initial render. Don’t send the whole API response object. This keeps your HTML payload small.
    • Go Parallel: If a page needs data from multiple APIs, use Promise.all to fire off those requests at the same time. This is way faster than waiting for them one by one.
    • Handle Errors with Finesse: For critical errors (such as a product not found), throw an HTTPError to display a proper error page. For non-critical stuff, pass an error flag in props so the component can handle it without crashing.
  • Fetch Non-Essential Data on the Client: Anything that’s not needed for the initial, above-the-fold view (such as reviews or related products) should be fetched on the client side within a useEffect hook. This enables your initial page to load faster, improving TTFB and LCP.
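
The “Go Parallel” tip is worth seeing in code. A minimal getProps-style sketch, where fetchProduct and fetchInventory are hypothetical stand-ins for your actual Commerce API calls:

```javascript
// Hypothetical API helpers; in a real project these would hit the Commerce APIs.
const fetchProduct = async (id) => ({ id, name: 'Example Product' });
const fetchInventory = async (id) => ({ id, ats: 42 });

// Sequential: total time is the SUM of both round trips.
async function getPropsSequential({ productId }) {
    const product = await fetchProduct(productId);
    const inventory = await fetchInventory(productId);
    return { product, inventory };
}

// Parallel: total time is the time of the SLOWEST round trip.
async function getPropsParallel({ productId }) {
    const [product, inventory] = await Promise.all([
        fetchProduct(productId),
        fetchInventory(productId)
    ]);
    return { product, inventory };
}

getPropsParallel({ productId: '123' }).then((props) => console.log(props.product.name));
```

With two 300 ms API calls, the sequential version costs roughly 600 ms of server time while the parallel version costs roughly 300 ms, a difference that lands directly in your TTFB.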

Whip Your JavaScript and Components into Shape

Your client-side React code is the most significant factor for INP. Time to optimise.

  • Split Your Code: The PWA Kit is already set up for code splitting with Webpack, so use it! Load your page-level components dynamically with the loadable utility. This means the code for the product detail page is only downloaded when a user visits it, thereby shrinking your initial bundle size.
  • Lazy Load Below-the-Fold: For heavy components that are “below the fold” or in modals, use lazy loading.
  • Stop Wasting Renders: Unnecessary re-renders are a top cause of poor INP. Use React’s memoisation hooks like a pro:
    • React.memo: Wrap components in React.memo to stop them from re-rendering if their props haven’t changed. Perfect for simple, presentational components.
    • useCallback: When you pass functions as props to memoised children, wrap them in useCallback. This maintains the function’s reference stability, preventing the child from re-rendering unnecessarily.
    • useMemo: Use useMemo for expensive calculations. This caches the result so it’s not recalculated on every single render.
  • Be Smart with State: The Context API is great, but be careful. Any update to a context re-renders all components that use it. For complex states, break your contexts into smaller, logical pieces (like a UserContext and a CartContext) to keep re-renders contained.
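
The caching idea behind useMemo can be demonstrated outside React. A plain-JavaScript sketch of single-slot memoisation, which is roughly what the hook does with its dependency array:

```javascript
// Memoise a single-argument function: recompute only when the input changes.
// This mirrors useMemo's behaviour of re-running only when dependencies change.
function memoizeOne(fn) {
    let lastArg;
    let lastResult;
    let called = false;
    return (arg) => {
        if (!called || arg !== lastArg) {
            lastArg = arg;
            lastResult = fn(arg);
            called = true;
        }
        return lastResult;
    };
}

let computations = 0;
const expensiveTotal = memoizeOne((prices) => {
    computations += 1;
    return prices.reduce((sum, p) => sum + p, 0);
});

const prices = [10, 20, 30];
expensiveTotal(prices); // computed
expensiveTotal(prices); // same reference: served from cache
console.log(computations); // 1
```

The comparison here is by reference, which is also why a new array literal on every render defeats useMemo: the “same” data in a fresh object looks like a change.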

Become a Caching Ninja with Managed Runtime

Getting your CDN cache hit ratio as high as possible is the single most effective way to boost LCP for most of your users.

  • Set Granular Cache-Control Headers:
    • Per-Page: Inside a page’s getProps function, you can set a custom cache time. A static “About Us” page can be cached for days (res.set('Cache-Control', 'public, max-age=86400')), while a product page might be cached for 15-30 minutes.
    • Use stale-while-revalidate: This header is pure magic. Cache-Control: s-maxage=600, stale-while-revalidate=3600 tells the CDN to serve a cached version for 10 minutes. If a request comes in after that, it serves the stale content instantly (so the user gets a fast response) and then fetches a fresh version in the background. It’s the perfect balance of speed and freshness.
  • Build Cache-Friendly Components: To be cached, your server-rendered HTML needs to be generic for all users. Any personalised content (like the user’s name or cart count) must only be rendered on the client. A simple trick is to wrap it in a check:

    {typeof window !== 'undefined' && <MyPersonalizedComponent />}

    This ensures it only renders in the browser.

  • Filter Useless Query Parameters: Marketing URLs often contain “unnecessary” parameters, such as gclid and the utm_* tags, which make every URL unique and prevent your cache from being effective. Edit the processRequest function in app/request-processor.js to strip these parameters before checking the cache. This allows thousands of different URLs to access the same cached page.
  • Cache Your APIs: By default, proxied requests aren’t cached by the CDN. This default lets you use proxy requests in your code without worrying about accidentally caching responses. If you want a proxied request to be cached by the CDN, simply change the path prefix from proxy to caching.
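
Query-parameter filtering boils down to a small, pure function. A sketch of the idea for app/request-processor.js; the exact processRequest contract can vary between PWA Kit versions, so treat the export shape as illustrative:

```javascript
// Tracking parameters that should never create distinct CDN cache entries.
const STRIP_PARAMS = [
    'gclid', 'fbclid',
    'utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'
];

// Pure helper: remove the tracking parameters from a query string.
function stripTrackingParams(querystring) {
    const params = new URLSearchParams(querystring);
    STRIP_PARAMS.forEach((name) => params.delete(name));
    return params.toString();
}

// Illustrative request-processor export: normalise the query string
// before the CDN cache key is computed.
const processRequest = ({ path, querystring }) => ({
    path,
    querystring: stripTrackingParams(querystring)
});

console.log(stripTrackingParams('color=red&gclid=abc123&utm_source=news')); // "color=red"
```

After this, /product/123?gclid=abc and /product/123?gclid=xyz both resolve to the same cache entry as /product/123.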

Tame the Third-Party Script Beast

A cartoon developer is taming a large 'beast' made of code and tangled wires. The developer is putting a collar labeled 'async' and holding a leash labeled 'defer' on the beast, while corralling other parts of it towards a pen labeled 'Lazy Load Zone'.
Taming the Third-Party Script Beast: A visual guide to managing external scripts for better web performance.

Third-party scripts are performance killers. You need to control them.

  • Audit and Justify: Open Chrome DevTools and look at the Network panel. Make a list of every third-party script. For each one, ask: “Do we need this?” If the value doesn’t outweigh the performance cost, eliminate it.
  • Load Asynchronously: Never, ever load a third-party script synchronously. Always use the async or defer attribute. Async lets it download without blocking the page, and defer makes sure it only runs after the page has finished parsing.
  • Lazy Load Widgets: For things like chat widgets or social media buttons, don’t load them initially. Use JavaScript to load the script only when the user scrolls near it or clicks a placeholder.
  • Use a Consent Management Platform (CMP): A good CMP integrated with Google Tag Manager (GTM) is a must-have. It stops marketing and ad tags from loading until the user gives consent. This is great for both privacy and performance.
  • Check Your Service Worker: Your PWA’s service worker might block requests to domains that aren’t on its whitelist. When adding a new third-party script, ensure its domain is configured correctly in your service worker to prevent blocking.
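
A CMP integration ultimately amounts to gating tag loaders behind a consent event. A CMP-agnostic sketch of that pattern (all names here are illustrative, not from any specific CMP):

```javascript
// Hold third-party tag loaders until the user grants consent.
function createConsentGate() {
    const queue = [];
    let granted = false;
    return {
        // Register a loader; it runs immediately if consent already exists.
        register(loadTag) {
            if (granted) loadTag();
            else queue.push(loadTag);
        },
        // Call this from your CMP's consent callback.
        grant() {
            granted = true;
            while (queue.length) queue.shift()();
        }
    };
}

const gate = createConsentGate();
let analyticsLoaded = false;
gate.register(() => { analyticsLoaded = true; });
console.log(analyticsLoaded); // false: waiting for consent
gate.grant();
console.log(analyticsLoaded); // true
```

In a real storefront the registered callbacks would inject the script tags; until grant() fires, the main thread stays free of their cost.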

Create Bulletproof, Lightning-Fast Images

Images are usually the heaviest part of a page. Optimising them is non-negotiable.

  • Serve Responsive Images: Use the <picture> element or srcset and sizes attributes on your <img> tags. This allows the browser to select the perfect-sized image for the device, so a phone doesn’t have to download a massive desktop image.
  • Use Modern Formats: Use the WebP format for images whenever possible. It provides significantly better compression than JPEG or PNG, often cutting file size by 25-35% without noticeable quality loss. Note that Cloudflare, which powers the eCDN, only supports WebP. If you use a third-party image provider, check what’s available, as there are now more modern options, including AVIF.
  • Compress, Compress, Compress: Use an image optimisation service or build tools to compress your images. A JPEG quality of 85 is usually a great sweet spot.
  • Prevent Layout Shift with Dimensions: This is a super-easy and effective fix for CLS. Always add width and height attributes to your <img> and <video> tags. This allows the browser to reserve the right amount of space before the media loads, preventing the annoying content jump.
  • Lazy Load Offscreen Images: For any image that’s not in the initial viewport, add the native loading="lazy" attribute. This instructs the browser to delay loading those images until the user scrolls down to them, which significantly improves performance.
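
Responsive markup and a dynamic image service work well together. A hypothetical srcset builder, assuming an image service that accepts sw (width) and q (quality) query parameters, as DIS does:

```javascript
// Build a srcset string from an image base URL and a list of target widths.
// Each entry has the form "<url>?sw=<w>&q=<quality> <w>w".
function buildSrcSet(baseUrl, widths, quality = 85) {
    return widths
        .map((w) => `${baseUrl}?sw=${w}&q=${quality} ${w}w`)
        .join(', ');
}

const srcset = buildSrcSet('https://example.com/images/hero.jpg', [320, 768, 1280]);
console.log(srcset);
```

Paired with a sizes attribute, this lets a phone pull the 320-pixel rendition while a desktop gets the 1280-pixel one, all from a single source image.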

Step 4: Make it a Habit - Monitoring, Debugging, and Continuous Improvement

Performance isn’t a one-and-done task. It’s a discipline. You need a solid workflow to monitor, debug, and prevent problems from creeping back in.

Your Performance-Sleuthing Toolkit

You have a powerful set of free tools to become a performance detective.

  • PageSpeed Insights (PSI): This is your starting point. The top section, “Discover what your real users are experiencing,” is your CrUX field data—your final report card. The bottom section, “Diagnose performance issues,” is your lab data from Lighthouse. Use its “Opportunities” and “Diagnostics” to find the technical fixes you need to improve your field data scores.
  • Google Lighthouse: Running Lighthouse from Chrome DevTools provides the most detailed results. Dig into the recommendations to find render-blocking resources, massive network payloads, and unused JavaScript. The “Progressive Web App” audit is also crucial for making sure your service worker and manifest are set up correctly.
  • Chrome DevTools:
    • Performance Panel: This is your primary tool for identifying INP issues. Record a page load or interaction to get a “flame chart” of everything the main thread is doing. Look for long tasks (marked with a red triangle) to find the exact JavaScript functions that are causing lag.
    • Network Panel: Use this to inspect all network requests. Check your Cache-Control headers, analyse asset sizes, and use “Request blocking” to temporarily disable third-party scripts to see how much damage they’re doing.
    • Application Panel: This is your PWA command centre. Inspect your manifest, check your service worker’s status, clear caches, and simulate being offline to test your app’s reliability.

Symptom: Poor LCP on Product Detail Page

  • Likely PWA Kit Cause(s): 1. Large, unoptimized hero image. 2. Slow, sequential API calls in getProps/useQuery during SSR. 3. Low CDN cache hit ratio.
  • Recommended Diagnostic Tool(s): 1. PageSpeed Insights to identify the LCP element. 2. ?__server_timing=true to check ssr:fetch-strategies time. 3. MRT logs and CDN analytics.
  • Actionable Solution(s): 1. Compress the hero image, serve it in WebP format, use srcset. 2. Refactor data fetching to use Promise.all or a single aggregated API call. 3. Set longer Cache-Control headers.

Symptom: Poor INP on Product Listing Page

  • Likely PWA Kit Cause(s): 1. Long JavaScript task during client-side hydration. 2. Excessive re-renders when applying filters. 3. A blocking third-party analytics script.
  • Recommended Diagnostic Tool(s): 1. DevTools Performance Panel to identify long tasks. 2. React DevTools Profiler to visualize component renders. 3. DevTools Network Panel to block the script and re-test.
  • Actionable Solution(s): 1. Code-split the PLP’s JavaScript. 2. Use React.memo, useCallback, and useMemo on filter components. 3. Defer or lazy-load the third-party script.

Symptom: High CLS on Homepage

  • Likely PWA Kit Cause(s): 1. Images loading without width and height attributes. 2. A cookie consent banner or ad injected dynamically. 3. Web fonts causing a flash of unstyled text (FOUT).
  • Recommended Diagnostic Tool(s): 1. Lighthouse audit to identify elements causing shifts. 2. DevTools Performance Panel with “Screenshots” enabled to see the shifts happen.
  • Actionable Solution(s): 1. Add explicit width and height to all <img> tags. 2. Reserve space for the banner/ad with CSS. 3. Preload key fonts using <link rel="preload">.

PWA Kit-Specific Debugging Tricks

The PWA Kit has some built-in secret weapons for debugging.

  • The __server_timing Parameter: Add ?__server_timing=true to any URL in your dev environment. You’ll get a Server-Timing header in the response that breaks down exactly how long each part of the SSR process took. It’s perfect for figuring out if a slow response is because of a slow API or a heavy React component.
  • The ?__server_only Parameter: Use this parameter to see the pure, server-rendered version of a page without any client-side JavaScript. It’s great for seeing what search engines see and for spotting layout shifts between the server and client versions.
  • Managed Runtime Log Center: In production, the Log Center is your go-to for troubleshooting. You can search and filter logs from your app server to diagnose server-side errors and performance issues that only show up in the wild.
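
Since Server-Timing is a structured header, you can parse it when scripting performance checks against a dev environment. A minimal parser sketch; the metric names used here (ssr:setup, ssr:fetch-strategies, ssr:render) are examples of the kind of entries the PWA Kit reports:

```javascript
// Parse a Server-Timing header value into { name: durationMs } pairs.
function parseServerTiming(header) {
    return header.split(',').reduce((metrics, entry) => {
        const parts = entry.trim().split(';');
        const name = parts[0];
        const dur = parts.find((p) => p.trim().startsWith('dur='));
        metrics[name] = dur ? parseFloat(dur.trim().slice(4)) : null;
        return metrics;
    }, {});
}

const timings = parseServerTiming(
    'ssr:setup;dur=12.5, ssr:fetch-strategies;dur=480, ssr:render;dur=95'
);
console.log(timings['ssr:fetch-strategies']); // 480
```

A large fetch-strategies number relative to the render number points at slow API calls rather than heavy React components, which tells you which of the two optimisation playbooks to open.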

Wrapping Up: Your Journey to a High-Performance Storefront

Building a blazingly fast storefront with the Salesforce PWA Kit isn’t about finding one magic trick. It’s a discipline. It requires understanding what users care about, knowing your architecture’s strengths and weaknesses, and committing to a cycle of measuring, optimising, and monitoring.

In the cutthroat world of B2C commerce, that’s not just a nice-to-have—it’s the ultimate competitive advantage that drives real revenue.

The post From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed appeared first on The Rhino Inquisitor.

]]>
Mastering Sitemaps in Salesforce B2C Commerce: A Developer’s Guide https://www.rhino-inquisitor.com/mastering-sitemaps-in-sfcc/ Mon, 16 Jun 2025 07:30:19 +0000 https://www.rhino-inquisitor.com/?p=12947 In Salesforce B2C Commerce Cloud (SFCC), the sitemap is more than just a list of links. It's a powerful, scalable system for telling search engines exactly what's on your site and how important it is. Getting it right means faster indexing, better visibility, and a happier marketing team. Getting it wrong can leave your brand-new products invisible to Google.

The post Mastering Sitemaps in Salesforce B2C Commerce: A Developer’s Guide appeared first on The Rhino Inquisitor.

]]>

Let’s be honest, as developers, “SEO” can sometimes feel like a four-letter word handed down from the marketing team. But what if I told you that one of the most critical SEO tools, the sitemap, is actually a fascinating piece of platform architecture you can control, automate, and even extend with code?

In Salesforce B2C Commerce Cloud (SFCC), the sitemap serves a purpose beyond simply listing links. It is a robust and scalable system that communicates to search engines precisely what content exists on your site and its level of importance. Properly configuring your sitemap results in faster indexing, improved visibility, and greater satisfaction for your marketing team. If done incorrectly, however, it could render your newly launched products invisible to search engines like Google, as well as tools such as ChatGPT and Google Gemini.

This guide will walk you through everything you need to know, from leveraging the powerful out-of-the-box tools to writing custom integrations and mastering sitemaps in a headless PWA Kit world.

The "Easy Button": The Built-in Sitemap Generator

Think of the standard SFCC sitemap generator as the “easy button” that handles 80% of the work for you. For massive e-commerce sites with millions of URLs, this is a lifesaver.

At its core, the platform cleverly sidesteps search engine limitations, like the 50,000 URL or 10MB file size cap, by creating a two-tiered system. It generates a main sitemap_index.xml file, which is the only URL you need to give to Google. This index file then points to a series of child sitemaps (sitemap_0.xml, sitemap_1.xml, etc.) that contain the actual URLs.
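
On disk, that two-tier output looks something like this. An illustrative sitemap_index.xml (the host name, file names, and dates are examples):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap_0.xml</loc>
    <lastmod>2025-06-16</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap_1.xml</loc>
    <lastmod>2025-06-16</lastmod>
  </sitemap>
</sitemapindex>
```

Search engines fetch the index, then crawl each child file it lists, so submitting the single index URL covers the whole catalog.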

You control all of this from the Business Manager: Merchant Tools > SEO > Sitemaps.

Your Control Panel: The Settings Tab

A screenshot of the Sitemap Settings in Merchant tools in Salesforce B2C Commerce Cloud
The Sitemap Settings in the Business Manager

The Settings tab is your main control panel. Here’s what you, as a developer, need to care about:

  • Content Inclusion: You can choose exactly what gets included: products, categories, content assets, and even product images.
  • Priority & Change Frequency: These settings are direct hints to search engine crawlers. Priority (a scale of 0.1 to 1.0) suggests a URL’s importance relative to other pages on your site. Change Frequency (from always to never) suggests how often a page’s content is updated.
  • Product Rules: You can get granular, choosing to include only available products, available and orderable products, or all products. This directly ties into your inventory and data strategy.
  • hreflang for Multi-Locale Sites (Alternate URLs): If you manage a site with multiple languages or regions, enabling Include alternate URLs (hreflang) is a huge win. It automatically adds the necessary tags to tell search engines about the different versions of a page, a task that can be a manual pain on other platforms.
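
These settings surface as standard sitemap-protocol tags on each entry. An illustrative <url> element showing change frequency, priority, and an hreflang alternate (URLs are examples):

```xml
<url>
  <loc>https://www.example.com/en/product/123.html</loc>
  <changefreq>daily</changefreq>
  <priority>0.8</priority>
  <xhtml:link rel="alternate" hreflang="fr"
              href="https://www.example.com/fr/product/123.html"/>
</url>
```

The xhtml:link alternates are what the “Include alternate URLs (hreflang)” toggle adds for you on multi-locale sites.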

The Golden Rule of Scheduling

Screenshot of the "Job" tab in Sitemap configuration in the Business Manager
The Job tab

You can run the sitemap generation manually or, more practically, schedule it as a recurring job from the Job tab. Here is the single most important operational detail:

Always schedule the sitemap job to run after your daily data replication from the staging instance. 

If you run it before, all the new products and content from that day’s replication will be missing from the sitemap, rendering it stale the moment it’s created.

Going Custom: When the Built-in Isn't Enough

The Custom tab in the Sitemap settings in the Business Manager
The "Custom Sitemaps" tab

What happens when you have content that doesn’t live in SFCC? Maybe you have a WordPress blog, an external reviews provider, or a separate forum. You need to get those URLs into your site’s sitemap index. SFCC gives you two powerful paths to do this.

Path 1: The Classic Job Step (The Batch Approach)

The traditional method involves building a custom job step within a cartridge. This is ideal for batch-oriented processes, such as pulling a sitemap file from an SFTP server on a nightly basis.

Your script would use the dw.sitemap.SitemapMgr script API. The key method is SitemapMgr.addCustomSitemapFile(hostName, file), which takes a file your script has fetched and places it in the correct directory to be picked up by the main sitemap generation job. This requires some classic SFCC development: writing the script and defining the job step in a steptypes.json or steptypes.xml file.

Path 2: The Modern SCAPI Endpoint (The Real-Time Approach)

For a more modern, API-first architecture, consider using the Salesforce Commerce API (SCAPI). The Shopper SEO API provides the uploadCustomSitemapAndTriggerSitemapGeneration endpoint.

This is a PUT request that enables a trusted external system to upload a custom sitemap file directly to SFCC and initiate the generation process asynchronously. This is the ideal solution for event-driven systems. For example, a headless CMS could use a webhook to call this endpoint the instant a new article is published, getting that URL into the sitemap almost immediately.

Choices... choices

  • Manual Upload. Best for: one-offs, testing. Mechanism: UI in Business Manager. Vibe: Quick & Dirty.
  • Script API Job. Best for: batch processes (e.g., nightly sync). Mechanism: custom job step using dw.sitemap.SitemapMgr. Vibe: Classic & Reliable.
  • SCAPI Endpoint. Best for: real-time, event-driven integrations. Mechanism: PUT request to the Shopper SEO API. Vibe: Modern & Agile.

Sitemaps in the Headless Universe: PWA Kit Edition

Going headless with the Composable Storefront (PWA Kit) changes the game, but the sitemap strategy remains firmly rooted in the backend—and for good reason. The SFCC backend is the system of record for the entire product catalog. 

Forcing the PWA Kit frontend to generate the sitemap would require an API call nightmare to fetch all that data.

Instead, you use the backend’s power and bridge the gap.

The Standard Headless Playbook

  1. Configure the Hostname Alias: This is the most critical step. In Business Manager (Merchant Tools > SEO > Aliases), you must create an alias that exactly matches your PWA Kit’s live domain (e.g., www.your-pwa.com). This ensures the backend generates URLs with the correct domain.
  2. Generate in Business Manager: Use the standard job you’ve already configured.
  3. Update robots.txt: In your PWA Kit project’s code, add the Sitemap directive to your robots.txt file, pointing to the full URL of the sitemap index (e.g., Sitemap: https://www.your-pwa.com/sitemap_index.xml).
  4. Proxy the Request: Your PWA Kit app needs to handle requests for the sitemap. You can add a rule to your server-side rendering logic (often in app/ssr.js) to proxy requests for /sitemap_index.xml and its children to the SFCC backend where the files actually live. Or use the eCDN for this job!
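Step 4 can be sketched as an Express-style handler wired into app/ssr.js (the backend hostname and route patterns are placeholders, and the eCDN rule mentioned above is often the simpler option):

```javascript
// Proxy sitemap requests from the PWA Kit app to the SFCC backend,
// where the generated files actually live. Hostname is a placeholder.
const https = require('https');

const SFCC_HOST = 'your-instance.dx.commercecloud.salesforce.com';

function sitemapProxy(req, res) {
    https.get({ host: SFCC_HOST, path: req.originalUrl }, (upstream) => {
        res.status(upstream.statusCode);
        res.set('Content-Type', 'application/xml');
        upstream.pipe(res);
    });
}

// Registered alongside the other routes in the ssr.js setup, e.g.:
// app.get(['/sitemap_index.xml', /^\/sitemap_.*\.xml$/], sitemapProxy);
```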

The Hybrid Approach for PWA-Only Routes

But what about pages that only exist in your PWA? Think of custom React-based landing pages or an “About Us” page that isn’t a content asset in SFCC. The backend generator has no idea these exist.

The solution is an elegant hybrid approach that you can automate in your CI/CD pipeline:

  1. Backend Generates Core Sitemap: The scheduled job on SFCC runs as normal, creating sitemaps for all products, categories, and content assets.

  2. Frontend Generates Custom Sitemap: As a build step in your CI/CD pipeline, run a script that scans your PWA Kit’s routes and generates a small, separate sitemap file (e.g., pwa-custom.xml) containing only these frontend-specific URLs.

  3. Automate the Merge: The final step of your deployment script makes a PUT request to the uploadCustomSitemapAndTriggerSitemapGeneration SCAPI endpoint, uploading the pwa-custom.xml file. This tells SFCC to regenerate the main index, adding a link to your new custom file.  

This strategy uses the right tool for the job: the backend’s efficiency for the massive catalog and the frontend’s build process to handle its own unique pages.
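Step 2 of the hybrid approach can be sketched as a small Node build script (the route list and domain are illustrative placeholders — a real pipeline might derive the routes from your app's route configuration instead):

```javascript
// Build a minimal sitemap file for PWA-only routes as part of a CI/CD build.
// The routes and domain below are placeholders.
const routes = ['/about-us', '/our-story', '/careers'];
const domain = 'https://www.your-pwa.com';

function buildCustomSitemap(domain, routes) {
    const urls = routes
        .map((route) => `  <url>\n    <loc>${domain}${route}</loc>\n  </url>`)
        .join('\n');
    return '<?xml version="1.0" encoding="UTF-8"?>\n'
        + '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + `${urls}\n</urlset>\n`;
}

const xml = buildCustomSitemap(domain, routes);
// In the pipeline you would write this out as pwa-custom.xml before
// uploading it via the SCAPI endpoint in step 3.
console.log(xml);
```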

Conclusion

By mastering these tools and strategies, you can transform sitemap management from a chore into a powerful, automated part of your development and deployment workflow. You’ll build more robust sites, ensure content is discovered faster, and become an SEO hero in the process.

The post Mastering Sitemaps in Salesforce B2C Commerce: A Developer’s Guide appeared first on The Rhino Inquisitor.

Local vs Shared Variation Attributes in Commerce Cloud https://www.rhino-inquisitor.com/local-vs-shared-variation-attributes-sfcc/ Mon, 14 Apr 2025 07:17:18 +0000 https://www.rhino-inquisitor.com/?p=12891 In the dynamic world of eCommerce, the concept of product variation holds significant importance. It empowers merchants to effectively present a range of product options, a crucial aspect for platforms like Salesforce B2C Commerce Cloud. These platforms often deal with extensive catalogs, each with a variety of attributes to cater to diverse customer preferences. Among […]

The post Local vs Shared Variation Attributes in Commerce Cloud appeared first on The Rhino Inquisitor.


In the dynamic world of eCommerce, the concept of product variation holds significant importance. It empowers merchants to effectively present a range of product options, a crucial aspect for platforms like Salesforce B2C Commerce Cloud. These platforms often deal with extensive catalogs, each with a variety of attributes to cater to diverse customer preferences.

Among the ways to handle product variations are local and shared variation attributes. In this article, we will delve into the technical differences between these two types of attributes, exploring their definitions, implementations in catalog import XML, and their respective advantages and disadvantages.

What Are Variation Attributes?

Let’s start by understanding what variation attributes are. These are the unique characteristics that define the different options for a specific product. For instance, a t-shirt available in multiple colors and sizes has ‘color’ and ‘size’ as its variation attributes, allowing customers to select their preferred options.

In Salesforce B2C Commerce Cloud, variation attributes play a pivotal role. They not only help in categorizing products but also significantly enhance the shopping experience by making product selection easier and more intuitive for customers.

In terms of catalog import XML, variation attributes are represented in a structured format that ensures the system understands the attributes associated with each product.

What Are Local Variation Attributes?

Local variation attributes are specific to a single product or a small group of products within a catalog. These attributes apply only to the respective products that define them, which means they can vary significantly from one product to another. Local attributes are particularly useful when there is a need to cater to unique product offerings that don’t apply to the broader catalog.

A screenshot of a product in the business manager showing the Variations tab with local variation attributes.

Implementation in Catalog Import XML

In the catalog import XML, local variation attributes are defined under the specific product they are associated with, which distinguishes them from shared attributes. The XML snippet below illustrates how local variation attributes are structured:

				
					<?xml version="1.0" encoding="UTF-8"?>
<catalog
	xmlns="http://www.demandware.com/xml/impex/catalog/2006-10-31" catalog-id="apparel-m-catalog">
	<product product-id="25113204M">
  	...
        
		<variations>
			<attributes>
				<variation-attribute attribute-id="color" variation-attribute-id="color">
					<display-name xml:lang="x-default">Color</display-name>
					<variation-attribute-values>
						<variation-attribute-value value="JJ2KCXX">
							<display-value xml:lang="x-default">Gulf</display-value>
						</variation-attribute-value>
						<variation-attribute-value value="JJG28XX">
							<display-value xml:lang="x-default">Pink</display-value>
						</variation-attribute-value>
						<variation-attribute-value value="JJI15XX">
							<display-value xml:lang="x-default">White</display-value>
						</variation-attribute-value>
					</variation-attribute-values>
				</variation-attribute>
				<variation-attribute attribute-id="size" variation-attribute-id="size">
					<display-name xml:lang="x-default">Size</display-name>
					<variation-attribute-values>
						<variation-attribute-value value="004">
							<display-value xml:lang="x-default">4</display-value>
						</variation-attribute-value>
						<variation-attribute-value value="006">
							<display-value xml:lang="x-default">6</display-value>
						</variation-attribute-value>
						<variation-attribute-value value="008">
							<display-value xml:lang="x-default">8</display-value>
						</variation-attribute-value>
					</variation-attribute-values>
				</variation-attribute>
			</attributes>
			<variants>
				<variant product-id="008885004885M"/>
                ...
            
			</variants>
		</variations>
        ...
    
	</product>
	<product product-id="008885004885M">
		<custom-attributes>
			<custom-attribute attribute-id="color">JJI15XX</custom-attribute>
			<custom-attribute attribute-id="refinementColor">white</custom-attribute>
			<custom-attribute attribute-id="size">006</custom-attribute>
			<custom-attribute attribute-id="width">Z</custom-attribute>
		</custom-attributes>
	</product>
</catalog>
				
			

In this example, the main product defines color and size variation attributes that apply only to this particular main product and its variants.

What Are Shared Variation Attributes?

On the other hand, shared variation attributes are those that can be applied across multiple products within the catalog. These attributes promote consistency and can streamline the management of products that share similar characteristics. For instance, if multiple shoes come in the same colors and sizes, having shared variation attributes simplifies catalog management.

A screenshot of the Business Manager showing where to configure Shared Variation Attributes: Products and Catalogs > Shared Variation Attributes - Select Catalog
Merchant Tools > Products and Catalogs > Shared Variation Attributes

Implementation in Catalog Import XML

Shared variation attributes in the catalog import XML are referenced as part of the catalog, rather than an individual product. The following XML example showcases how shared variation attributes are represented:

				
					<catalog
	xmlns="http://www.demandware.com/xml/impex/catalog/2006-10-31" catalog-id="apparel-m-catalog">
	<variation-attribute attribute-id="color" variation-attribute-id="color">
		<display-name xml:lang="x-default">Kleur</display-name>
		<variation-attribute-values>
			<variation-attribute-value value="black">
				<display-value xml:lang="x-default">Black</display-value>
			</variation-attribute-value>
		</variation-attribute-values>
	</variation-attribute>
	<variation-attribute attribute-id="size" variation-attribute-id="size">
		<display-name xml:lang="x-default">Size</display-name>
		<variation-attribute-values>
			<variation-attribute-value value="16">
				<display-value xml:lang="x-default">16</display-value>
				<description xml:lang="x-default">16</description>
			</variation-attribute-value>
			<variation-attribute-value value="17">
				<display-value xml:lang="x-default">17</display-value>
				<description xml:lang="x-default">17</description>
			</variation-attribute-value>
			<variation-attribute-value value="18">
				<display-value xml:lang="x-default">18</display-value>
				<description xml:lang="x-default">18</description>
			</variation-attribute-value>
		</variation-attribute-values>
	</variation-attribute>
	...
	 
	<product product-id="main">
        ....
        
		<variations>
			<attributes>
				<shared-variation-attribute attribute-id="color" variation-attribute-id="color"/>
				<shared-variation-attribute attribute-id="size" variation-attribute-id="size"/>
			</attributes>
			<variants>
				<variant product-id="123456"/>
				<variant product-id="7891011"/>
			</variants>
			<variation-groups>
				<variation-group product-id="123"/>
				<variation-group product-id="456"/>
			</variation-groups>
		</variations>
	</product>
	<product product-id="main-2">
        ....
        
		<variations>
			<attributes>
				<shared-variation-attribute attribute-id="color" variation-attribute-id="color"/>
				<shared-variation-attribute attribute-id="size" variation-attribute-id="size"/>
			</attributes>
			<variants>
				<variant product-id="223456"/>
				<variant product-id="8891011"/>
			</variants>
			<variation-groups>
				<variation-group product-id="223"/>
				<variation-group product-id="556"/>
			</variation-groups>
		</variations>
	</product>
	<product product-id="123456">
		<custom-attributes>
			<custom-attribute attribute-id="color">black</custom-attribute>
			<custom-attribute attribute-id="size">18</custom-attribute>
		</custom-attributes>
	</product>
</catalog>
				
			

In this case, both main products reference the same shared attributes for color and size, demonstrating the shared nature of these attributes.

Pros and Cons of Local Variation Attributes

Pros

  1. Specificity: Local variation attributes allow for highly tailored product offerings, accommodating unique features that don’t apply to other products.
  2. Flexibility: Merchants have the flexibility to quickly change or update the attributes for individual products without affecting others.
  3. Simplicity: For products that have very distinctive attributes, local variation attributes can simplify the overview for customers by streamlining options.

Cons

  1. Management Complexity: Having numerous local attributes can lead to a complex catalog management system, making it harder to maintain and update individual products.
  2. Redundancy: Local variation attributes, when overused, can lead to redundancy, especially if multiple products share similar attributes.
  3. Limited Scalability: As the catalog grows, managing local attributes can become increasingly cumbersome, limiting long-term scalability.
  4. Import XML Size: Duplicating attribute definitions on every main product inflates the import file over time, slowing down the overall import process.

Pros and Cons of Shared Variation Attributes

Pros

  1. Consistency: Shared attributes ensure consistency across products, creating a more streamlined shopping experience for customers.
  2. Ease of Management: Managing shared attributes can be less complex since changes made to these attributes automatically apply to all associated products.
  3. Scalability: Shared attributes offer a scalable approach; as new products are added, they can easily adopt existing attributes without requiring extensive modifications.
  4. Import performance: A smaller XML to import, with less duplication, means faster import speeds.

Cons

  1. Lack of Flexibility: The main drawback of shared variation attributes is the potential lack of flexibility when unique attributes are necessary for specific products.
  2. Limiting Options: Merchants may find it challenging to personalise products since shared attributes can constrain variation options.
  3. Over-generalisation: There is a risk of over-generalising product attributes, which may lead to a lackluster shopping experience for customers seeking specificity.

Combination?

In certain scenarios, you can combine both approaches: use shared attributes for most of your product catalog while relying on local variation attributes for the smaller portion that genuinely needs them!

Conclusion

Local and shared variation attributes play essential roles in product management within Salesforce B2C Commerce Cloud. Each has its set of advantages and potential downsides, and the choice between them often depends on the business’s specific requirements, the nature of the products being offered, and the desired customer experience.

Understanding the nuances of local and shared attributes is not only important but paramount for any organisation aiming to leverage the full potential of its B2C Commerce Cloud solution.

Sending Emails from Salesforce B2C Commerce Cloud: A Comprehensive Guide https://www.rhino-inquisitor.com/sending-emails-from-sfcc/ Mon, 09 Dec 2024 08:19:31 +0000 https://www.rhino-inquisitor.com/?p=12775 This article covers the reasons for opting to send emails via Salesforce Commerce Cloud, the platform's limitations, the steps for programmatically sending an email, testing email templates, configuring SPF records, and the possibility of using your own SMTP server.

The post Sending Emails from Salesforce B2C Commerce Cloud: A Comprehensive Guide appeared first on The Rhino Inquisitor.


Salesforce B2C Commerce Cloud is known as a monolithic system, providing a wide range of functionalities out of the box. One of the “smaller” features of the platform is its capability to send emails directly without needing a third-party service. 

In this article, we will discuss the reasons for choosing to send emails from Salesforce Commerce Cloud, the limitations of the platform, the steps to programmatically send an email, how to test email templates, the process of configuring SPF records, and whether you can utilise your own SMTP server.

Why Choose Salesforce B2C Commerce Cloud for Email?

In many past projects, I have opted to send emails from a Marketing Automation platform since it has many benefits compared to the built-in functionality. But if the items below do not affect you, SFCC can be a great platform to send your transactional emails (order confirmation, password reset, registration, etc.).

Marketing Opportunities

In most Marketing platforms, you have the freedom to be highly flexible with the templates, adapting them to the current time period and campaigns happening. In Salesforce B2C Commerce Cloud, a code deployment is necessary, and extra testing is required to ensure the built functionalities still work. This can increase the workload, making it less feasible to make multiple adaptations over the year.

While you may not want to modify your transactional emails frequently, it’s a lot easier to give them a nice ‘holiday’ or ‘easter’ styling for a few weeks in the year using dedicated marketing tools.

The same transactional email, restyled across the year

Page Designer to the rescue?

With the addition of Page Designer a few years ago, we have gotten a more “visual” way of creating pages, compared to static ISML templates with slots and Content Assets.

But with choosing to go this route, a few things need to be kept in mind:

  • A separate “master” template is required without the regular header/footer
  • Only components that include HTML/CSS understood by mail clients should be used (modern HTML & CSS can cause issues in some mail clients).
  • Personalization (Customer Groups in particular) can pose difficulties when the mail originates from a job rather than a direct storefront request, as the session “current customer” is not readily accessible.
  • If a user inadvertently deletes the page (yes, accidents happen), there should be a backup option or a notification to resolve the issue promptly.

Only Basic Features

Salesforce’s email capabilities may not be as feature-rich as those of dedicated email marketing platforms. You might miss advanced features such as personalised templates, optimised sending times and tracking.

Deliverability Concerns

Ensuring high deliverability rates can be challenging because you’re relying on shared IP addresses, which may affect the reputation of your emails. We will return to this topic later in the article since the platform provides us with some solutions!

Cost!

The previous points may seem like I’m advocating for a dedicated email platform. However, the marketing automation features I mentioned come with costs!

Generally, each email sent comes with a charge, and these costs can accumulate significantly month after month.

Fortunately, Salesforce Commerce Cloud doesn’t require a separate license, allowing you to send unlimited emails at no additional cost!

How to Send an Email Programmatically

Emails can easily be sent from Salesforce B2C Commerce Cloud via code. Here’s a basic example using server-side scripts:

				
function sendMail() {
    var mail = new dw.net.Mail();
    mail.addTo("to@example.org");
    mail.setFrom("from@example.org");
    mail.setSubject("Example Email");
    mail.setContent("my basic text content");

    // Returns Status.OK or Status.ERROR; sending is asynchronous, so the
    // mail may not have been sent yet when this method returns.
    mail.send();
}
				
			

You can enhance your options further by creating custom ISML templates, offering greater flexibility:

				
function sendMail(order) {
    // Build the map of objects the template can reference; here we assume a
    // hypothetical order confirmation scenario.
    var params = new dw.util.HashMap();
    params.put("Order", order);

    var template = new dw.util.Template("myTemplate.isml");
    var content = template.render(params);

    var mail = new dw.net.Mail();
    mail.addTo("to@example.org");
    mail.setFrom("from@example.org");
    mail.setSubject("Example Email");
    mail.setContent(content);

    // Returns Status.OK or Status.ERROR; sending is asynchronous, so the
    // mail may not have been sent yet when this method returns.
    mail.send();
}
				
			

But then, of course, comes the next question…

How do I test email templates in Commerce Cloud?

Unlike many Marketing Automation tools available today, the SFCC platform lacks an out-of-the-box email testing feature. However, there are ways to custom-build a solution.

Rendering Controller

With an ISML mail template, you can create a controller to display the email as a web page, using pre-defined parameters that enable developers and testers to easily see how it will appear.

Email Test Controller

You could also build a controller that sends an email to a given address, driven by query parameters like:

				
					TestEmail-Send?email=myemail@mail.com&orderId=10000001

				
			

This would send an email for that specific order to the email passed, allowing testers and developers to verify the template without having to go through the entire checkout flow.
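A hypothetical SFRA-style sketch of such a controller (the endpoint name, template path, and parameter handling are assumptions — and a guard like the one below matters, because you never want this reachable on production):

```javascript
'use strict';
// Sketch of a TestEmail controller for sandboxes; names and paths are placeholders.
var server = require('server');
var OrderMgr = require('dw/order/OrderMgr');
var Mail = require('dw/net/Mail');
var Template = require('dw/util/Template');
var HashMap = require('dw/util/HashMap');
var System = require('dw/system/System');

server.get('Send', function (req, res, next) {
    // Never allow test sends on the production instance.
    if (System.getInstanceType() === System.PRODUCTION_SYSTEM) {
        res.setStatusCode(403);
        res.json({ error: 'Not available on production' });
        return next();
    }

    var params = new HashMap();
    params.put('Order', OrderMgr.getOrder(req.querystring.orderId));

    var mail = new Mail();
    mail.addTo(req.querystring.email);
    mail.setFrom('from@example.org');
    mail.setSubject('Test: order confirmation');
    mail.setContent(new Template('mail/orderConfirmation.isml').render(params));
    var status = mail.send();

    res.json({ status: status.message });
    return next();
});

module.exports = server.exports();
```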

Configuring SPF Records

Sender Policy Framework (SPF) records are crucial for ensuring email deliverability. If this is not configured, providers such as Outlook and Gmail will simply prevent your emails from arriving. They will be completely blocked and will not even arrive in the spam folder.

Configuring these SPF records is clearly documented here.
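Conceptually, the SPF record is just a TXT record on the domain you send from. A sketch of such a zone entry (the include: host is a placeholder — use the exact value from the Salesforce documentation for your realm):

```
; DNS zone sketch — the include host below is a placeholder
example.com.    IN    TXT    "v=spf1 include:<b2c-commerce-spf-host> ~all"
```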

Can I Use My Own SMTP Server?

A screenshot of the "Administration > Operations > Email Settings" screen, which enables administrators to configure custom email and DKIM settings.
Administration > Operations > Email Settings

Salesforce B2C Commerce Cloud supports the use of an external SMTP server for sending emails. If you already have an established SMTP service or require advanced configurations, using an external SMTP server allows you to gain more control over the email delivery process, customise your settings, and potentially improve email deliverability rates.

DKIM Support

Using these SMTP settings, you can set up DKIM directly on the same page in the Business Manager.

What rules must I follow?

Under typical usage, you should encounter no issues. However, certain security measures may lead to a lockout.

In the worst case, you will not be able to send emails for 48 hours.

Are attachments possible?

Absolutely! This was among the first articles I published on the blog: How to send PDFs as attachments (though you’re certainly not restricted to just PDFs).

Can I send mails from the Composable Storefront?

Directly from the Managed Runtime? No.

You will have to call a custom API endpoint, or hook into an existing standard API, that sends the email through the platform on the storefront’s behalf.

Conclusion

While Salesforce B2C Commerce Cloud may not offer the same range of features as most Marketing Automation tools, it can still be highly effective for transactional messaging.
