The Realm Split: A Developer’s Field Guide to Migrating an SFCC Site


Have you ever found yourself in a deployment-day standoff? Your team is ready to push a critical feature for the US site, but it’s blocked because a seemingly unrelated change for the EU site, which shares your codebase, has failed QA. You’re stuck. This kind of organisational friction, where independent business units become entangled in a shared technical fate, is a clear signal that your single Salesforce B2C Commerce Cloud realm is cracking under pressure. The technical dependencies that once streamlined operations now create bottlenecks, and the shared codebase that once promised efficiency has become a source of risk and frustration.

When this friction becomes unbearable, the business is faced with a monumental decision: a realm split. This is the architectural divorce of a site from its original family of instances, code, and data. It is a deliberate move to carve out a new, autonomous environment where a business unit can operate without being constrained by the priorities, schedules, and technical debt of its siblings. But like any divorce, it is complex, costly, and fraught with peril. A realm split is not a simple data replication or a POD move; it is a full-scale migration that touches every aspect of the platform, from the underlying infrastructure to third-party integrations and historical analytics.

This field guide serves as a comprehensive, battle-tested blueprint for SFCC developers and architects tasked with navigating this process. It provides a detailed plan of action that covers the strategic ‘why,’ the tactical ‘how,’ and the critical ‘what to watch out for.’ From justifying the immense cost and effort to executing a minute-by-minute cutover plan and managing the post-split reality, this document is designed to be the definitive resource for successfully cleaving a site into its own sovereign territory.

Deconstructing the Monolith: The 'Why' and 'When' of a Realm Split

Before embarking on such a significant undertaking, it is imperative to understand the foundational architecture and the specific pressures that lead to its fracture. A realm split is a solution to a problem that is often more organisational than technical, and its justification must be built on a solid understanding of both the platform’s structure and the business’s evolution.

A Technical Primer on the SFCC Realm

In the SFCC ecosystem, a realm is the fundamental organisational unit. It is not merely a collection of sites but the entire infrastructure stack provided by Salesforce to a customer. This stack contains all the necessary hardware and software components to develop, test, and deploy a storefront, including web servers, application servers, and database servers. For a developer, the realm is the entire world in which their sites live and operate.

This world is rigidly structured into two distinct groups:

  • Primary Instance Group (PIG): This is the core operational group, and a realm can have only one. It consists of three instances: Production (the live storefront), Staging (for data setup and pre-deployment testing), and Development (for data enrichment and configuration).
  • Secondary Instance Group (SIG): This group contains the developer sandboxes.  Like the PIG, a realm can only have one SIG.

This architecture is designed for efficiency under a unified operational model. Sites within the same realm can share a master product catalog, a single codebase, and a standard set of administrative and development teams, creating significant economies of scale. However, this inherent sharing is also its greatest weakness when the business model diverges from its core.

Analyzing the Breaking Points: When a Single Realm Becomes Untenable

The decision to split a realm is a lagging indicator of a fundamental misalignment between a company’s organizational structure and its technical architecture. The initial choice of a single realm is often based on an assumption of a unified business strategy. The need for a split arises when that assumption is no longer valid. This manifests through several distinct business and technical drivers.

When Workflows Clash: The Tipping Point for a Realm Split

Business Drivers

  • Divergent Business Processes: The most compelling reason for a split is when different business units can no longer operate under a single set of rules. For example, a pharmaceutical site that requires doctor’s prescription validation has a fundamentally different checkout and order processing logic than a standard apparel site. Forcing both to coexist in a single realm with a shared codebase leads to immense complexity and risk.
  • Independent P&L and Operational Autonomy: When business units have separate Profit & Loss (P&L) responsibilities, they often need the authority to define, prioritise, and fund initiatives independently. If the European division needs to launch a feature to meet a local market demand, it cannot be blocked by the North American division’s development schedule. A shared realm creates a zero-sum game for resources and deployment windows, whereas separate realms provide the necessary autonomy.
  • Global Teams and Conflicting Schedules: A single realm has a unified maintenance and release schedule. This becomes untenable for global teams operating in different time zones. A release that requires downtime at 2 AM in the US might be prime shopping time in the Asia-Pacific region. Separate realms allow for independent operational schedules tailored to each market.

Technical Drivers

  • Data Residency and Compliance: This is often a non-negotiable, legally mandated driver. Regulations like the GDPR may require that all personally identifiable information (PII) for European customers be stored in a data centre (or POD) located within the EU. If the existing realm is hosted on a US POD, a new, separate realm must be provisioned in the correct region to achieve compliance.
  • Codebase Complexity and Deployment Risk: As divergent business requirements are shoehorned into a single codebase, it inevitably becomes a tangled mess of site-specific conditional logic (if (site.ID === 'US') { ... } else if (site.ID === 'EU') { ... }). This increases the cognitive load for developers, slows down development, and dramatically increases the risk that a change for one site will have unintended, catastrophic consequences for another. A realm split allows the departing site to start with a clean, purpose-built codebase, free from the technical debt of its former siblings (a short sketch of this anti-pattern follows this list).
  • Performance and Governor Limits: While it should be the last option considered, severe performance degradation can be a driver for a split. If extensive code optimisation fails to resolve issues and distinct business units consistently create resource contention or hit governor limits, isolating the high-traffic or computationally intensive site in its own realm can restore stability for all parties.
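
Here is that sketch: a hypothetical shared helper that has accumulated per-site branching. The site IDs and checkout steps are invented for illustration; the point is that every business unit's special case lives, and can break, in code that all sites deploy together.

    // Hypothetical example of the shared-codebase anti-pattern: one helper, three diverging businesses.
    var Site = require('dw/system/Site');

    function getCheckoutSteps() {
        var siteId = Site.getCurrent().getID();

        if (siteId === 'US') {
            return ['shipping', 'payment', 'review'];
        } else if (siteId === 'EU') {
            return ['shipping', 'vat-validation', 'payment', 'review'];
        } else if (siteId === 'PharmaEU') {
            // Prescription validation exists for only one business unit, yet every
            // deployment of this cartridge carries this branch for all sites.
            return ['prescription-check', 'shipping', 'payment', 'review'];
        }

        return ['shipping', 'payment', 'review'];
    }

    module.exports = { getCheckoutSteps: getCheckoutSteps };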

The Cardinal Rule: A Realm Split is the Last Resort

It is crucial to understand that a realm split is a “complex and costly undertaking”. Before committing to this path, all other alternatives must be thoroughly investigated and exhausted. If the primary issue is storage, adopting a data archiving solution or purchasing additional storage from Salesforce is a far more straightforward and cost-effective option. 

If performance is the problem, a rigorous cycle of code optimisation and profiling should be the first course of action. A realm split introduces significant new overhead in terms of infrastructure costs and management complexity. 

It should only be pursued when the strategic benefits of autonomy and isolation unequivocally outweigh these substantial new burdens.

The Grand Blueprint: A Phased Plan of Attack

Executing a realm split is a major re-platforming project disguised as a migration. Success depends on a meticulously detailed, phased plan that accounts for every dependency, from stakeholder alignment to data integrity and third-party coordination. The following blueprint breaks the process down into six critical phases.

Success in a complex project like a realm split hinges on a meticulously detailed, phased plan. This image visualizes a team of experts collaborating on a holographic blueprint, representing the strategic and coordinated effort required to navigate the six critical phases of the migration.

Phase 1: The Scoping & Justification Gauntlet

This initial phase is about building the business case and creating a comprehensive map of the existing environment. Rushing this stage is a recipe for budget overruns and unforeseen complications.

  • Define Clear Goals and Objectives: Before any technical work begins, all stakeholders—business, marketing, development, and operations—must agree on what a successful split looks like. These goals should be specific and measurable (e.g., “The new EU site is live on the new realm with a 15% improvement in average page load time,” or “The EU development team can execute independent weekly deployments without impacting the US release schedule”). This provides a north star for the project and a clear definition of what is considered “done.”
  • Conduct a Thorough Audit of the Source Realm: A new realm is a clean slate; do not pollute it with the cruft of the old one. Conduct a deep audit of the source environment to identify and catalogue every component. This includes all custom cartridges, jobs, services, custom object definitions, site preferences, and integrations. Any stale, redundant, or unused metadata should be earmarked for cleanup before the migration begins. This reduces the complexity of the new environment and prevents future headaches.
  • Map Every Integration: This is one of the most critical and frequently underestimated tasks. Create a definitive diagram and inventory of every single third-party system that communicates with the SFCC instance. For each integration (payment gateways, tax services, OMS, ERP, PIM, etc.), determine its fate: Will it connect to the new realm only? Does it need to connect to both? Will it require a completely new configuration or even a new contract? Answering these questions early is essential for planning and vendor coordination.

Phase 2: Engaging the Gatekeepers - Navigating Salesforce Support

Several key steps in a realm split can only be performed by Salesforce. Engaging with their support and provisioning teams early and clearly is a hard dependency for the entire project.

  • Order the New Realm: A new realm is a new commercial product. The process begins by working with your Salesforce account executive to submit a standard realm order form. This initiates the provisioning of the new PIG and SIG infrastructure.
  • Data Migration Support: Some pieces of data can only be migrated by Salesforce; keep this dependency in mind!
  • Open the Go-Live Ticket: If the site being moved will be the first site in the new realm, you are required to go through the full, formal “Go Live” process. This is a structured engagement with Salesforce that has its own set of checklists, performance reviews, and timelines. This process must be initiated by opening a Go Live ticket well in advance of your target launch date.
  • Establish Clear Communication Channels: Use the Salesforce Help portal to log all cases related to the realm split. It is critical to ensure you select the correct B2C tenant/realm ID in the case details for both the source and destination environments. This ensures your requests are routed correctly and provides an official channel for coordinating the migration steps that require Salesforce intervention.

Phase 3: The Great Data Exodus - A Migration Deep Dive

The process of moving data is not a single “lift and shift” operation; it is a series of carefully orchestrated steps. The core strategy is to minimize downtime during the final cutover window by migrating the bulk of the data incrementally in the days or weeks leading up to the launch. Only the final “delta”—the data that has changed since the last sync—should be moved during the go-live event.

This process reveals a critical truth about a realm split: it is not a simple copy. It is the construction of a new, parallel stack that must be made to perfectly mirror the relevant parts of the old one. Every piece of data, code, and configuration must be explicitly migrated and, more importantly, validated in the new environment. The project plan must account for this re-validation effort, not just the migration itself. The most dangerous mindset a developer can have is “it worked in the old realm, so it will work in the new one.”

The Realm Split Data Migration Checklist

The complexity of data migration, with its varied methods and ownership, demands a single source of truth. The following table acts as a project management artifact, translating the plan into a clear, actionable checklist.

Also, please review this page carefully, as it contains a wealth of information on the migration plan you need to set up.

  • Product Catalog
    When: Continuous in the old and new realms.
    Key Considerations & Risks: Includes products, categories, assignments, and sorting rules. Relatively low risk.
    Primary Owner: Dev Team / Merchandising

  • Price Books
    When: Continuous in the old and new realms.
    Key Considerations & Risks: Ensure all relevant price books are included. Test pricing thoroughly post-import.
    Primary Owner: Dev Team / Merchandising

  • Content Assets & Libraries
    When: Manual syncs at pre-defined moments.
    Key Considerations & Risks: Includes content assets, folders, and library assignments.
    Primary Owner: Dev Team / Content Team

  • Slot Configurations
    When: Manual syncs at pre-defined moments.
    Key Considerations & Risks: Verify slot configurations on all page types post-import.
    Primary Owner: Dev Team / Merchandising

  • Promotions & Campaigns
    When: Manual syncs at pre-defined moments.
    Key Considerations & Risks: Includes promotion definitions, campaigns, and customer groups.
    Primary Owner: Dev Team / Marketing

  • Custom Objects
    When: Manual syncs at pre-defined moments.
    Key Considerations & Risks: Export definitions via Site Export. Migrate data using custom jobs with dw.io classes (see the sketch after this checklist).
    Primary Owner: Dev Team

  • Site Preferences & Metadata
    When: Manual syncs at pre-defined moments.
    Key Considerations & Risks: Many settings are included in site export, but some (e.g., sequence numbers) must be manually configured and verified.
    Primary Owner: Dev Team

  • Customer Profiles
    When: Complete migration 1-2 weeks before the go-live, delta during and after.
    Key Considerations & Risks: CRITICAL PII RISK: Ensure the Customer Sequence Number in the new realm is set higher than the highest customer number being imported to prevent duplicate IDs and data exposure.
    Primary Owner: Dev Team

  • Customer Passwords
    When: Part of the Customer Profiles migration.
    Key Considerations & Risks: Passwords are encrypted but can be exported and imported into a different realm without any intervention from Salesforce.
    Primary Owner: Dev Team

  • Order History
    When: Complete migration 1-2 weeks before the go-live, delta during and after.
    Key Considerations & Risks: You can export and import orders yourself as long as the site is not marked “live”. WARNING: Complete the customer import before importing orders; otherwise, orders will not be linked to the correct customers in the background. MANDATORY: For a live site, order data migration must be performed by Salesforce Support. This is a hard dependency requiring at least 10 working days’ notice.
    Primary Owner: Dev Team / Salesforce Support

  • System-Generated Coupons
    When: Via Salesforce Support ticket.
    Key Considerations & Risks: MANDATORY: To ensure existing coupons remain valid, the underlying “seeds” must be migrated by Salesforce Support. Requires a separate, specific support ticket.
    Primary Owner: Salesforce Support

  • Active Data & Einstein
    When: Via Salesforce Support ticket, during go-live.
    Key Considerations & Risks: Migrating this data between realms requires a support ticket.
    Primary Owner: Dev Team / SF Support
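
The checklist above notes that custom object data is typically moved with custom jobs built on the dw.io classes. As a rough, hedged illustration (not the author's script), a job step along these lines could dump a hypothetical 'NewsletterSubscription' custom object type to a CSV file in IMPEX for transfer to the new realm; the object type and attribute names are assumptions for this example only.

    'use strict';

    var CustomObjectMgr = require('dw/object/CustomObjectMgr');
    var File = require('dw/io/File');
    var FileWriter = require('dw/io/FileWriter');
    var CSVStreamWriter = require('dw/io/CSVStreamWriter');
    var Status = require('dw/system/Status');

    // Custom job step: export all 'NewsletterSubscription' custom objects to IMPEX.
    exports.execute = function () {
        var dir = new File(File.IMPEX + '/src/realm-split');
        if (!dir.exists()) {
            dir.mkdirs();
        }

        var fileWriter = new FileWriter(new File(dir, 'newsletter-subscriptions.csv'));
        var csv = new CSVStreamWriter(fileWriter);
        csv.writeNext(['email', 'optInDate']);

        var iterator = CustomObjectMgr.getAllCustomObjects('NewsletterSubscription');
        try {
            while (iterator.hasNext()) {
                var subscription = iterator.next();
                csv.writeNext([subscription.custom.email, String(subscription.custom.optInDate)]);
            }
        } finally {
            iterator.close();
            csv.close();
            fileWriter.close();
        }

        return new Status(Status.OK);
    };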

Phase 4: Rebuilding the Engine - Code, Config, and Integrations

With the new realm provisioned, the focus shifts to building out the application and its ecosystem.

  • Establish a New CI/CD Pipeline: Your existing deployment pipeline is tied to the old realm. A new, parallel pipeline must be created that targets the sandboxes and PIG instances of the new realm. The full codebase for the migrating site should be deployed and tested through this new pipeline.
  • Replicate and Validate Configuration: Although site import/export handles most of the configuration, several critical settings must be manually replicated and validated in the new realm’s Business Manager. This includes global preferences like sequence numbers, security settings, and any custom site preferences that are not part of the standard export package.
  • Execute the Integration Plan: This is where the audit from Phase 1 becomes an action plan.
  • Create New API Clients: An API client in Account Manager is tied to a specific realm and cannot be moved (although it can be used by a different realm). You must create entirely new API clients for the new realm, which will generate new Client IDs and secrets for every single integration.
  • Coordinate with Third-Party Vendors: Proactively contact every third-party vendor. Provide them with the new API credentials and the new hostnames for the Staging and Production instances. Update any IP allowlists on both sides. This process can take time and must be initiated well in advance of the planned cutover.

Phase 5: The Crucible of Testing

Rigorous, end-to-end testing is the only way to ensure a smooth launch. The new Staging instance is your battlefield for uncovering issues before they impact customers.

  • User Acceptance Testing (UAT): Business users and QA teams must conduct exhaustive testing of every user journey on the new Staging instance. This includes account creation, login, searching and browsing, adding to cart, applying promotions, and completing checkout with all supported payment methods.
  • End-to-End Integration Validation: It is not enough to test the SFCC functionality in isolation. Every single third-party integration must be tested end-to-end. Place test orders that flow through to your payment gateway, tax service, and Order Management System. Verify that inventory updates and shipping notifications are working correctly.
  • Performance and Load Testing: The new realm is a new set of infrastructure. Use the Business Manager Code Profiler to identify any server-side performance regressions. Conduct a full-scale load test against the new PIG (either on the pre-production environment or a dedicated “loaner realm”) to simulate peak traffic and ensure the new environment is appropriately scaled and configured to handle the production load.

Phase 6: The Cutover - A Minute-by-Minute Runbook

The go-live event should be a precisely choreographed execution of a pre-written script, not a series of improvisations. Create a hyper-detailed, time-stamped runbook that lists every action, its owner, and its expected duration. This runbook is law.

The sequence, based on Salesforce’s official guidance, is as follows:

  1. Final communications to all stakeholders. Freeze all administrative changes in the old realm’s Business Manager.
  2. Place the old site into Maintenance Mode. Immediately update the primary Salesforce Support ticket with the exact timestamp.
  3. Go-Live: After a mandatory 90-minute waiting period required by Salesforce, the migration process can begin.
  4. Execute the final delta data migration jobs (new customers, new orders, etc.). Coordinate closely with Salesforce Support as they perform their required migration tasks (active data).
  5. Migration Complete: Once all data is moved, perform a final smoke test on the new Production instance using internal hostnames (bypassing public DNS).
  6. Update the public DNS records (e.g., the www CNAME) to point the storefront domain to the new realm’s Production instance endpoint.
  7. Once DNS propagation is confirmed, set the new site’s status to Online in Business Manager. Update the Salesforce Support ticket again with the exact timestamp.
  8. Post-Launch Hypercare: All hands on deck. Intensively monitor server logs, analytics dashboards, and order flow for any anomalies. The project team should be on standby to address any immediate issues.

The SEO Minefield: Preserving Your Digital Ghost

Underestimating the SEO impact of a realm split is a catastrophic error that can wipe out years of search equity. This image visualizes the high-stakes process of navigating this "SEO minefield," where a single misstep can have explosive consequences. The illuminated path represents the meticulous, non-negotiable strategy—like a comprehensive 301 redirect map—required to safely migrate a site and preserve its valuable "digital ghost."

A realm split, from a search engine’s perspective, is essentially a complete site migration. Underestimating the SEO impact is a catastrophic error that can instantly wipe out years of accumulated search equity, traffic, and revenue. Google’s own representatives have stated that split and merge operations take “considerably longer for Google to process” than standard migrations because their algorithms must re-crawl and re-evaluate the entire structure of the new site (although if your site looks the same and keeps the same URL structure, Google will notice little beyond the change in IP addresses). Patience and meticulous planning are paramount.

  • The 301 Redirect Map is Non-Negotiable: This is the single most critical SEO artefact for the migration. You must create a comprehensive map that pairs every single indexable URL on the old site with its corresponding URL on the new site. This includes the homepage, all category pages, all product detail pages, and any content or marketing pages. Use a site crawler to generate a comprehensive list of URLs from the old site, ensuring 100% coverage. These redirects must be implemented in Business Manager (Merchant Tools > SEO > URL Redirects) and be live the moment the new site is launched.

    Note: This only applies if the URL structure changed.

  • Replicate URL Rules and Configuration: The structure of your URLs is a key ranking signal. In the new realm’s Business Manager, you must meticulously replicate the URL Rules (Merchant Tools > SEO > URL Rules) from the old realm. Pay close attention to settings for forcing lowercase URLs and defining character replacements for spaces and special characters. This ensures that the URLs generated by the new site will match the old ones to the letter.
  • Manage Sitemaps and Robots.txt: The moment the new site is live and DNS has propagated, you must generate a new sitemap.xml file from the new realm’s Business Manager and submit it to Google Search Console (only if the location changed). This tells Google to begin crawling the new site structure. Simultaneously, ensure that the robots.txt file for the new site is correctly configured, allowing crawlers to access all important pages and blocking any non-public sections of the site (like internal search or cart pipelines).
  • Coordinate with SEO and Marketing Teams: SEO migration is not solely a technical task. The SEO and marketing teams must be integral members of the project team from day one. They are responsible for auditing the redirect map, setting up the new Google Search Console property, monitoring for crawl errors post-launch, and tracking keyword rankings and organic traffic to measure the impact of the migration.

The Developer's Survival Guide: Warnings, Pitfalls, and Pro-Tips

The difference between a smooth migration and a career-limiting disaster often comes down to awareness of the hidden dangers and adoption of battle-tested best practices. This is the hard-won wisdom from the trenches.

Red Alerts (Warnings)

  • Irreversible Analytics Loss: This cannot be overstated. Historical analytics data from Reports & Dashboards does not transfer to the new realm. The new realm begins with a zeroed-out dashboard. While you can still access the old data by selecting the old realm ID in the Reports & Dashboards interface, the data from the two realms is never combined into a single view.

    Actionable Advice:
    Before the split, work with the business and analytics teams to identify and export all critical historical reports. This data must be preserved externally, as it will be inaccessible from the new realm’s reporting interface.
  • The Data Corruption Gauntlet: Heed the warnings from Salesforce Support. The cutover runbook is not a suggestion; it is a strict protocol. Changing the old site back to “Online” after the migration process has started, or failing to follow the instructions, can result in irreversible data corruption. There is no room for error in the cutover sequence.
  • PII and the Sequence Number Bomb: The warning about Customer Sequence Numbers is critical enough to repeat. If you import customer profiles with customer numbers (e.g., cust_no = 5000) into a new realm where the sequence number is still at its default (e.g., 1000), the system will eventually start creating new customers with numbers that conflict with your imported data. This can lead to a catastrophic PII breach where one customer logs in and sees another customer’s profile, address, and order history (for example, when a customer was not imported for whatever reason and their number is then “taken over” by a newly registered account).

    Actionable Advice:
    Before importing any customer data, go to Administration > Global Preferences > Sequence Numbers in the new realm and manually set the Customer Number to a value safely above the highest customer number in your import file.

Common Traps (Pitfalls)

  • The Obscure Realm Setting: A realm is more than what you see in Business Manager. There are underlying configurations that are only visible to Salesforce Support. 

    Lesson Learned: Never assume the new realm is a perfect 1:1 clone of the old one. If you encounter a bizarre, inexplicable bug that defies all logical debugging, your next step should be to open a high-priority support case and ask Salesforce Support to perform a full comparison of all underlying realm configurations between the source and the destination.
  • Forgetting the “Small” Data: It is easy to focus on the big-ticket items like products and orders and completely forget smaller but equally critical data points. The migration of system-generated coupon seeds is a perfect example. If you use these types of coupons and forget to open the specific support ticket to have the seeds migrated, all previously issued coupons will become invalid the moment you go live, leading to failed promotions and customer frustration.
  • Underestimating Integration Timelines: A realm split is an integration project for every single system that connects to SFCC. Third-party vendors have their own change control processes, support SLAs, and technical resource availability. Assuming a vendor can instantly provide new credentials or update an IP allowlist on your go-live day is a recipe for failure.

Veteran's Wisdom (Pro-Tips)

  • Rehearse, Rehearse, Rehearse: Conduct at least one, and preferably several, full dry runs of the entire data migration and cutover process. This can be done by migrating data from a sandbox in the old realm to a sandbox in the new realm. A rehearsal will inevitably uncover flawed assumptions, broken scripts, or missing data in a low-stakes environment, allowing you to fix the process before the high-pressure production event.
  • Script Everything Possible: Any manual step in a cutover plan is a potential point of failure. Automate as much of the process as you can. Write scripts for exporting and transforming data, for replicating configurations via Business Manager APIs (where possible), and for post-launch validation checks. Automation reduces the risk of human error and speeds up the execution of the runbook. A small example of such a post-launch check follows this list.
  • Benchmark Performance Before and After: Do not rely on subjective feelings about site speed. Before the split, use the Code Profiler and external web performance tools (like WebPageTest) to establish a clear, quantitative performance baseline for the site on the old realm. After the new site is live, run the exact same tests. This provides concrete data to demonstrate that performance has not degraded, allowing you to quantify any improvements gained from the new, isolated infrastructure.
  • Plan for a Rollback, but Work to Not Need It: Your runbook must include a clear “point of no return” and a detailed rollback plan. A rollback is technically possible in the early stages of the cutover (primarily by reverting the DNS change), as the old realm and site will still exist. However, it becomes exponentially more difficult with every new order and customer registration that occurs on the new site. The rollback plan is a critical safety net, but the primary focus should be on meticulous planning and testing to ensure it is never needed.
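
One concrete way to apply the “script everything” advice above is a small post-launch validation script. The sketch below assumes a Node.js 18+ runtime (built-in fetch) and a hypothetical redirects.json mapping file; it simply checks that a sample of old URLs return a 301 to the expected new URL after the DNS switch. Treat it as a starting point, not a finished tool.

    // Hedged sketch: post-cutover smoke check for the 301 redirect map.
    // redirects.json (hypothetical): [{ "from": "https://old.example.com/mens", "to": "https://www.example.com/mens" }]
    const fs = require('fs');

    async function checkRedirect(entry) {
        // Do not follow the redirect, so the status code and Location header can be inspected.
        const response = await fetch(entry.from, { redirect: 'manual' });
        const location = response.headers.get('location');
        const ok = response.status === 301 && location === entry.to;
        console.log((ok ? 'PASS ' : 'FAIL ') + entry.from + ' -> ' + location + ' (' + response.status + ')');
        return ok;
    }

    async function main() {
        const entries = JSON.parse(fs.readFileSync('redirects.json', 'utf8'));
        const results = await Promise.all(entries.map(checkRedirect));
        const failures = results.filter(function (r) { return !r; }).length;
        console.log((results.length - failures) + '/' + results.length + ' redirects OK');
        process.exit(failures === 0 ? 0 : 1);
    }

    main();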

Conclusion: Life in the Multiverse

The successful launch of the migrated site is not the end of the project; it is the beginning of a new, more complex operational reality. The realm split achieves its primary goal of technical and business autonomy, but it does so by trading the constraints of a monolith for the complexities of a distributed system.

The new normal is a multi-realm architecture. On the positive side, the newly independent site now has the flexibility to develop and deploy on its own schedule, with a codebase tailored to its specific needs. The risk of a change for one brand impacting another is eliminated.

However, this autonomy comes at a price. The business now bears the increased infrastructure costs of an additional PIG and SIG, along with the added maintenance overhead of managing two separate codebases, two deployment pipelines, and two sets of administrative processes.

An illustration of a realm split, where a single, monolithic system fractures into multiple autonomous realms. This transition unlocks business and technical flexibility but introduces the new operational complexity of managing a distributed system, including the critical need for data synchronization between the separate entities.

One of the most significant new challenges is data synchronisation. If the business still requires a shared product catalog or consistent promotional data across realms, this can no longer be achieved through the platform’s native sharing capabilities. Sites in different realms cannot share a catalog directly. Instead, you must build and maintain a new operational process, likely a set of automated jobs and a CI/CD pipeline, to handle the export of data from a “master” realm and its import into the “subscriber” realm.

This introduces a new potential point of failure and a new set of tasks for the operations team.

Ultimately, a realm split is an immense undertaking that fundamentally reshapes a company’s digital commerce architecture. It is the right decision—and often the only decision—when the organisational friction and technical limitations of a single realm become an insurmountable barrier to growth. The significant cost and complexity are justified only when the business and technical autonomy it unlocks is a strategic necessity.

Taming the AI Beast: A Custom Toolkit to Supercharge Copilot for Salesforce Commerce Cloud

(Full post: https://www.linkedin.com/pulse/taming-ai-beast-custom-toolkit-supercharge-copilot-commerce-theunen-0i7se/)

Have you ever felt like your AI pair programmer—be it GitHub Copilot, Claude, or Cursor—has a brilliant mind that suddenly develops amnesia the moment you mention dw.catalog.ProductMgr? You're not alone. The anxiety is palpable in the SFCC community. We've all been sold the dream of the AI-augmented developer, a future where we operate at a higher level of abstraction, leaving the boilerplate to our silicon partners. Yet, for those of us deep in the trenches of Salesforce B2C Commerce Cloud, the reality has been… frustrating.

Taming the Beast: A Developer’s Deep Dive into SFCC Meta Tag Rules


At some point in your Salesforce B2C Commerce Cloud career, you’ve been handed The Spreadsheet. It’s a glorious, terrifying document, often with 10,000+ rows, meticulously crafted by an SEO team. Each row represents a product, and each column contains the perfect, unique meta title and description destined to win the favour of the Google gods. Your heart sinks. You see visions of tedious data imports, endless validation, and the inevitable late-night fire drill when someone screams, “The staging data doesn’t match production!”.

Most of us have glanced at the “Page Meta Tag Rules” section in Business Manager, shrugged, and moved on to what we consider ‘real’ code. That’s a mistake. This isn’t just another BM module for merchandisers to tinker with; it’s a declarative engine for automating one of the most tedious and error-prone parts of e-commerce SEO. It’s a strategic asset for developers to build scalable, maintainable, and SEO-friendly sites.

This guide will dissect this powerful feature from a developer’s perspective. We’ll tame this beast by exploring its unique syntax, demystifying the “gotchas” of its inheritance model, and outlining advanced strategies for PDPs, PLPs, and even those tricky Page Designer pages. By the end, you’ll know how to leverage this tool to make your life easier and your SEO team happier, all without accidentally nuking their hard work.

The Anatomy of a Rule: Beyond the Business Manager UI

The first mental hurdle to clear is that Meta Tag Rules are not an imperative script. They are a declarative system. You are not writing code that executes line by line. Instead, you are defining a set of instructions—a recipe—that the platform’s engine interprets to generate a string of text. This distinction is fundamental because it dictates how these rules are built, tested, and debugged.

It’s a specialised, declarative Domain-Specific Language (DSL), not a general-purpose scripting environment like Demandware Script. This explains why you can’t just call arbitrary script APIs from within a rule and why the error feedback is limited. It’s about defining what you want the output to be and letting the platform’s engine figure out how to generate it.

The Three Pillars of Rule Creation

The process of creating a rule within Business Manager at Merchant Tools > SEO > Page Meta Tag Rules can be broken down into three logical steps:

Meta Tag Definitions (The "What")

A screenshot of the meta tag rule definitions screen in the Business Manager showing the description, og:url, robots, and title meta tag definition.

This is where you define the type of HTML tag you intend to create. Think of it as defining the schema for your output. You specify the Meta Tag Type (e.g., name, property, or title for the <title> tag) and the Meta Tag ID (e.g., description, keywords, og:title). For a standard meta description, the Type would be name and the ID would be description, which corresponds to <meta name="description"...>.

Rule Creation & Scopes (The "How" and "Where")

A screenshot of the Create Entry modal, displaying the form used to create a new rule for a specific scope, in this case, the Product Detail page.

This is the core logic. You create a new rule, give it a name, and associate it with one of the Meta Tag IDs you just defined. Critically, you must select a Scope. The scope (e.g., Product, Category/PLP, Content Detail/CDP) is the context in which the rule is evaluated. It determines which platform objects and attributes are available to your rule’s syntax. 

For example, the Product object is available in the Product scope, but not in the Content Listing Page scope.

Assignments (The "Who")

A screenshot of the Page Meta Tag Rule assignments screen in Business Manager.

Once a rule is defined, you must assign it to a part of your site. You can assign a rule to an entire catalog, a specific category and its children, or a content folder. This assignment triggers the platform to use your rule for the designated pages.

The Syntax Cheat Sheet: Your Rosetta Stone

Don't let the unique syntax of SFCC's Meta Tag Rules intimidate you. Think of this cheat sheet as your Rosetta Stone, unlocking the ability to create powerful, dynamic, and SEO-friendly tags for your entire site.

The rule engine has its own unique syntax, which is essential to master. All dynamic logic must be wrapped in ${...}. 

  • Accessing Object Attributes: The most common action is pulling data directly from platform objects. The syntax is straightforward: Product.name, Category.displayName, Content.ID, or Site.httpHostName. You can access both system and custom attributes, though some data types like HTML, Date, and Image are not supported.

  • Static Text with Constant(): To include a fixed string within a dynamic expression, you must use the Constant() function, such as Constant('Shop now at '). This is vital for constructing readable sentences.

Mastering Conditional Logic

The real power of the engine lies in its conditional logic. This is what allows for the creation of intelligent, fallback-driven rules.

  • The IF/THEN/ELSE Structure: This is the workhorse of the rule engine. It allows you to check for a condition and provide different outputs accordingly.

  • The Mighty ELSE (The Hybrid Enabler): The ELSE operator is the key to creating a “hybrid” approach that respects manual data entry. A rule like ${Product.pageTitle ELSE Product.name} first checks for a value in the manually-entered pageTitle attribute. If, and only if, that field is empty, it falls back to using the product’s name. This single technique is the most important for preventing conflicts between automated rules and manual overrides by merchandisers. 

  • Combining with AND and OR: These operators allow for more complex logic. AND requires both expressions to be true, while OR requires only one. They also support an optional delimiter, like AND(' | '), which elegantly joins two strings with a separator, but only if both strings exist. This prevents stray separators in your output. (A worked example follows this list.)

  • Equality with EQ: For direct value comparisons, use the EQ operator. This is particularly useful for logic involving pricing, for instance, to check if a product has a price range (ProductPrice.min EQ ProductPrice.max) or a single price.
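
As a quick worked example of the delimiter form described in the AND/OR bullet above (the exact syntax is inferred from that description, so verify it in the Business Manager preview before relying on it):

${Product.brand AND(' | ') Product.name}

    With Product.brand = “Peak Performance” and Product.name = “SummitPro Runner”, the generated output is “Peak Performance | SummitPro Runner”.
    If Product.brand is empty, the output is simply “SummitPro Runner”, with no stray separator.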

The Cascade: Understanding Inheritance, Precedence, and the Hybrid Approach

The Meta Tag Rules engine was designed with the “Don’t Repeat Yourself” (DRY) principle in mind. The inheritance model, or cascade, allows you to define a rule once at a high level, such as the root of your storefront catalog, and have it automatically apply to all child categories and products. This is incredibly efficient, but only if you understand the strict, non-negotiable lookup order the platform uses to find the right rule for a given page.

I’m not going to go into much detail here, as the complete fallback order is already well documented.

The Golden Rule: Building Hybrid-Ready Rules

The most common and damaging pitfall is the “Accidental Override.” Imagine a merchandiser spends days crafting the perfect, keyword-rich pageTitle for a key product. A developer then deploys a seemingly helpful rule like ${Product.name} assigned to the whole catalog. Because the rule is found and applied, it will silently overwrite the merchandiser’s manual work.

This isn’t just a technical problem; it’s a failure of process and collaboration. The platform’s inheritance model and conditional syntax force a strategic decision about data governance: will SEO be managed centrally via rules, granularly via manual data entry, or a hybrid of both? The developer’s job is not just to write the rule but to implement the agreed-upon governance model.

The solution is the Hybrid Pattern, which should be the default for almost every rule you create.

Example Hybrid PDP Title Rule: ${Product.pageTitle ELSE Product.name} | ${Site.displayName}

Let’s break down how the engine processes this:

  1. Product.pageTitle: The platform first checks the product object for a value in the pageTitle attribute. This is the field merchandisers use for manual entry in Business Manager (or hopefully imported from a third-party system).

  2. ELSE: If, and only if, the pageTitle attribute is empty or null, the engine proceeds to the expression after the ELSE operator. If pageTitle has a value, the rule evaluation stops, and that value is used.

This pattern provides the best of both worlds: automation and scalability for the thousands of products that don’t need special attention, and precise manual control for the high-priority pages that do. Adopting this pattern as a standard practice is the key to a harmonious relationship between development and business teams.

Advanced Strategies and Best Practices

Once you’ve mastered the fundamentals of syntax and inheritance, you can begin to craft mighty rules that go far beyond simple title generation.

Crafting Killer Rules: Practical Examples

The Perfect PDP Title (Hybrid)

Combines the product’s manual title, or falls back to its name, brand, and the site name. 

${Product.pageTitle ELSE Product.name AND Constant(' - ') AND Product.brand AND Constant(' | ') AND Site.displayName}

Scenario 1 (Manual pageTitle exists):

    Data: Product.pageTitle = “Best Trail Running Shoe for Rocky Terrain”
    Generated Output: Best Trail Running Shoe for Rocky Terrain

Scenario 2 (No manual pageTitle, falls back to dynamic pattern):

    Data:
    Product.name = “SummitPro Runner”
    Product.brand = “Peak Performance”
    Site.displayName = “GoOutdoors”

    Generated Output: SummitPro Runner - Peak Performance | GoOutdoors

The Engaging PLP Description (Hybrid)

Checks for a manual category description, otherwise generates a compelling, dynamic sentence. 

${Category.pageDescription ELSE Constant('Shop our wide selection of ') AND Category.displayName AND Constant(' at ') AND Site.displayName AND Constant('. Free shipping on orders over $50!')}

Scenario 1 (Manual pageDescription exists):

    Data: Category.pageDescription = “Explore our premium, all-weather tents. Designed for durability and easy setup, perfect for solo hikers or family camping trips.”

    Generated Output: Explore our premium, all-weather tents. Designed for durability and easy setup, perfect for solo hikers or family camping trips.

Scenario 2 (No manual pageDescription, falls back to dynamic pattern):

    Data:
    Category.displayName = “Camping Tents”
    Site.displayName = “GoOutdoors”

    Generated Output: Shop our wide selection of Camping Tents at GoOutdoors. Free shipping on orders over $50!

Dynamic OpenGraph Tags

Create separate rules for og:title and og:description using the same hybrid patterns. For og:image, you can access the product’s image URL. 

${ProductImageURL.viewType} (Note: replace viewType with the specific view type you need, e.g. large)

    Scenario: A user shares a product page on a social platform.
    Data: The system has an image assigned to the product in the ‘large’ slot.
    Generated Output: https://www.gooutdoors.com/images/products/large/PROD12345_1.jpg

Dynamic Robots Tags for Faceted Search

This is a truly advanced use case that demonstrates how rules can implement sophisticated SEO strategy. This rule helps prevent crawl budget waste and duplicate content issues by telling search engines not to index faceted search pages. 

${IF SearchRefinement.refinementColor OR SearchPhrase THEN Constant('noindex,nofollow') ELSE Constant('index,follow')}

Scenario 1 (User refines a category by color):

A user is on the “Backpacks” category page and clicks the “Blue” color swatch to filter the results.

    Data: SearchRefinement.refinementColor has a value (“Blue”).
    Generated Output: noindex,nofollow
    Result: This filtered page won’t be indexed by Google, saving crawl budget.

Scenario 2 (User performs a site search):

A user types “waterproof socks” into the search bar.

    Data: SearchPhrase has a value (“waterproof socks”).
    Generated Output: noindex,nofollow
    Result: The search results page won’t be indexed.

Scenario 3 (User lands on a standard category page):

A user navigates directly to the “Backpacks” category page without any filters.

    Data: SearchRefinement.refinementColor is empty AND SearchPhrase is empty.
    Generated Output: index,follow
    Result: The main category page will be indexed by Google as intended.

The Page Designer Conundrum: The Unofficial Workaround

Here we encounter a significant limitation: out of the box, the Meta Tag Rules engine does not work with standard Page Designer pages. The underlying Page API object lacks the necessary pageMetaTags support. This creates a significant gap for sites that rely on content marketing and campaign landing pages built in Page Designer.

Luckily, a complete, working “workaround” example has already been created by David Pereira.

The Minefield: Warnings, Pitfalls, and Troubleshooting

While powerful, the Meta Tag Rules engine is a minefield of potential “gotchas” that can frustrate developers and cause real business impact if not anticipated.

  • Warning – The “Accidental Override”: This cannot be overstated. A simple, non-hybrid rule (${Product.name}) deployed to production can instantly nullify months of careful, manual SEO work by the merchandising team. The Hybrid Pattern (${Product.pageTitle ELSE...}) is your shield. Always use it. This is fundamentally a process failure, not just a technical one, highlighting the need for a clear “contract” between development and business teams about who owns which data.

  • Pitfall – The “30-Minute Wait of Despair”: When you save or assign a rule in Business Manager, it can take up to 30 minutes for the change to appear on the storefront. This is due to platform-level caching. This delay is a classic initiation rite for new SFCC developers who are convinced their rule is broken. The solution is patience: save your rule, then go get a coffee before you start frantically debugging. (Note: I personally have never had to wait this long)

  • Pitfall – The Empty Attribute Trap: If your rule references an attribute (Product.custom.seoKeywords) that is empty for a particular product, the engine treats it as a null/false value. This can cause your conditional logic to fall through to an ELSE condition you didn’t expect. This underscores that the effectiveness of your rules is directly dependent on the quality and completeness of your catalog and content data.

Troubleshooting the "Black Box"

You cannot attach the Script Debugger to the rule engine or step through its execution. Troubleshooting is a process of indirect observation.

  1. Step 1: Preview in Business Manager: Your first and best line of defense. The SEO module has a preview function that lets you test a rule against a specific product, category, or content asset ID. This gives you instant feedback on the generated output without affecting the live site.

  2. Step 2: Inspect the Source: The ultimate source of truth is the final rendered HTML. Load the page on your storefront, right-click, and select “View Page Source.” Search for <title> or <meta name="description"> to see exactly what the engine produced.

  3. Step 3: The Code-Level Safety Net: As a developer integrating the rules into templates, you have one final check. The dw.web.PageMetaData object, which is populated by the rules, is available in the pdict. You can use the method isPageMetaTagSet('description') within an <isif> statement in your ISML template. This allows you to render a hardcoded, generic fallback meta tag directly in the template if, for some reason, the rule engine failed to generate one.
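
As a rough sketch of that safety net in an SFRA-style htmlHead template (the pdict.CurrentPageMetaData property name follows the SFRA base cartridge and is an assumption here; adapt the condition to however your templates expose dw.web.PageMetaData):

    <iscomment>Render a generic fallback only when neither a rule nor manual entry produced a description.</iscomment>
    <isif condition="${!pdict.CurrentPageMetaData.isPageMetaTagSet('description')}">
        <meta name="description" content="Shop ${dw.system.Site.current.displayName} for our full range of products."/>
    </isif>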

The Performance Question: Debunking the Myth

A common concern is that complex nested IF/ELSE rules might slow down page load times, but this is mostly a myth. The real performance issue relates to caching. For cached pages, the impact on performance is nearly nonexistent because the server evaluates the rule only once when generating the page’s HTML during the initial request. This HTML is then stored in the cache. Future visitors receive this pre-rendered static HTML directly from the cache, skipping re-evaluation. The small performance cost only occurs on cache misses. Thus, the focus shouldn’t be on creating overly simple rules but on maintaining a high cache hit rate. 

We can be confident that the Salesforce team has developed an effective feature to guarantee optimal performance. Keep in mind the platform cache with a 30-minute delay we previously mentioned. Within that “black box,” a separate system is likely also in place to protect performance.

The Final Verdict: Meta Tag Rules vs. The Alternatives

When deciding how to manage SEO metadata in SFCC, developers face three philosophical choices:

Manual Entry Only (The Control Freak)

  • Manually populating the pageTitle, pageDescription, etc., for every item in Business Manager.

    • Pros: Absolute, granular control. Perfect for a small catalog or a handful of critical landing pages.

    • Cons: Completely unscalable. Highly prone to human error and data gaps. A maintenance and governance nightmare for any site of significant size.

Custom ISML/Controller Logic (The Re-inventor)

Ignoring the rule engine and writing your own logic in controllers and ISML templates to build meta tags.

  • Pros: Theoretically unlimited flexibility. You can call external services, perform complex calculations, etc.

  • Cons: You are re-inventing a core platform feature, which introduces significant technical debt. The logic is completely hidden from business users, making it a black box that only developers can manage. It’s harder to maintain and creates upgrade path risks.

Meta Tag Rules (The Pragmatist)

Using the native feature as intended.

  • Pros: The standard, platform-supported, scalable, and maintainable solution. The logic is transparent and manageable by trained users in Business Manager. It fully supports the hybrid approach, offering the perfect balance of automation and control.

  • Cons: You are constrained by the built-in DSL. It has known limitations, like the Page Designer issue and syntax, that may require custom workarounds.

What about the PWA Kit?

Yes, you can absolutely continue to leverage the power of Page Meta Tag Rules from the Business Manager in a headless setup. The key is understanding that your headless front end (like a PWA) communicates with the SFCC backend via APIs. 

While historically this might have required a development task to extend a standard API or create a new endpoint to expose the dynamically generated meta tag values, this is becoming increasingly unnecessary. Salesforce is actively expanding the Shopper Commerce API (SCAPI), continuously adding new endpoints and enriching existing ones to expose more data directly.

This ongoing expansion, as seen with enhancements to APIs like Shopper Search and Shopper Products, means that the SEO-rich data generated by your rules is more likely to be available out of the box. Instead of building custom solutions, the task for developers is shifting towards simply querying the correct, updated SCAPI endpoint. 

This evolution makes it easier than ever to fetch the meta tags for these pages. It validates the headless approach, allowing you to maintain a robust, centralised SEO strategy in the Business Manager while fully embracing the flexibility and performance of a modern front-end architecture.
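
To make that concrete, here is a heavily hedged sketch of how a headless front end might read those SEO values over SCAPI. The endpoint shape and the pageTitle, pageDescription, and pageKeywords field names are written from memory of the Shopper Products API and should be verified against the current SCAPI reference; SHORT_CODE, ORG_ID, SITE_ID, and the access token are placeholders.

    // Hedged sketch only: fetch SEO metadata for a PDP in a headless storefront.
    async function getProductSeo(productId, accessToken) {
        var url = 'https://SHORT_CODE.api.commercecloud.salesforce.com'
            + '/product/shopper-products/v1/organizations/ORG_ID/products/'
            + encodeURIComponent(productId)
            + '?siteId=SITE_ID';

        var response = await fetch(url, {
            headers: { Authorization: 'Bearer ' + accessToken }
        });
        var product = await response.json();

        // SEO fields exposed by the API; depending on the SCAPI version these may be
        // the raw Business Manager attributes or rule-generated values.
        return {
            title: product.pageTitle,
            description: product.pageDescription,
            keywords: product.pageKeywords
        };
    }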


Conclusion: Go Forth and Automate

Salesforce B2C Commerce Cloud’s Page Meta Tag Rules are far more than a simple configuration screen. They are a strategic tool for building scalable, efficient, and collaborative e-commerce platforms. By mastering the hybrid pattern, understanding the inheritance cascade, knowing how to tackle limitations like the Page Designer gap, and—most importantly—communicating with your business teams, you can transform SEO from a manual chore into an automated powerhouse.

So, the next time that dreaded SEO spreadsheet lands in your inbox, don’t sigh and start writing an importer. Crack open the Page Meta Tag Rules, build some smart, hybrid rules, and go grab a coffee. You’ve just saved your future self hundreds of hours of pain.

You’re welcome.

Field Guide to Custom Caches: Wielding a Double-Edged Sword


You think you know caching. You’ve enabled page caching, fiddled with content slot TTLs, and called it a day. And your Salesforce B2C Commerce Cloud site is still slower than a snail in molasses. Why? Because you’re ignoring the most potent weapon in your performance arsenal: the Custom Cache.

Custom Caches are a double-edged sword, though. Wielded with discipline, precision, and a deep understanding of their limitations, they are one of the most potent performance-tuning instruments in your arsenal. Wielded carelessly, they will cut you, your application, and your customer’s experience to ribbons. The problem is that the platform’s API for dw.system.CacheMgr is deceptively simple, masking a minefield of architectural traps for the unwary developer.   

This is not a beginner’s tutorial. This is a field guide for the professional SFCC developer who needs to move beyond basic usage and master this powerful, perilous feature. We’re going to charge headfirst into the complexity, expose the sharp edges, and arm you with the patterns and discipline required to use Custom Caches safely, effectively, and with confidence. 

The Lay of the Land: Choosing Your Data Store

Before you even think about writing CacheMgr.getCache(), you need to understand what a custom cache is actually for. Using the wrong tool for the job is the first step toward disaster.

In SFCC, you have several options for storing temporary data, and choosing the correct one is a foundational architectural decision.

Custom Cache vs. Page Cache: A Quick Primer

Developers new to the platform frequently conflate Custom Caches and the Page Cache. They are fundamentally different beasts operating at different layers of the architecture. Mistaking one for the other is like using a hammer to turn a screw.

  • Page Cache is for caching rendered output. It operates at the web server tier and stores full HTTP responses—typically HTML fragments generated from ISML templates. You control it with the <iscache> tag or the response.setExpires() script API method. When a request hits a URL whose response is in the Page Cache, the web server serves it directly, never even bothering the application server. It is incredibly fast and is the primary defence against high traffic for storefront pages.

  • Custom Cache is for caching application data. It operates at the application server tier and stores JavaScript objects and primitives in memory, where they are shared across requests on that server. You control it exclusively through the dw.system.CacheMgr script API. It’s designed to avoid recalculating expensive data, or re-fetching it from an external source, every time a controller needs it to produce a response.

The distinction is critical: cache the final, cooked meal with Page Cache; cache the raw ingredients with Custom Cache. To avoid re-rendering a product tile’s HTML, use Page Cache with a remote include. If you need to avoid re-fetching the product’s third-party ratings data before you render the tile, use a Custom Cache.
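
Here is a minimal SFRA-style sketch of the two layers working together. It is illustrative only: the route name, template path, the 'ExternalRatingsAPI' cache ID, and the placeholder ratings object are assumptions, and the page-cache middleware path is the one used by SFRA's reference cartridge.

var server = require('server');
var cache = require('*/cartridge/scripts/middleware/cache'); // SFRA page-cache middleware
var CacheMgr = require('dw/system/CacheMgr');

// Page Cache: applyDefaultCache marks the rendered response (the cooked meal) as cacheable.
server.get('Tile', cache.applyDefaultCache, function (req, res, next) {
    // Custom Cache: the raw ingredient - third-party ratings data - is cached as plain data.
    var ratingsCache = CacheMgr.getCache('ExternalRatingsAPI');
    var ratings = ratingsCache.get('ratings::' + req.querystring.pid, function () {
        return { average: 4.2, count: 87 }; // placeholder for a real service call
    });

    res.render('product/tile', { ratings: ratings });
    next();
});

module.exports = server.exports();

The middleware caches the rendered output at the web tier, while CacheMgr keeps the underlying data warm at the application tier.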

The Developer's Dilemma: request vs. session vs. CacheMgr

Within the application tier, you have three primary ways to store temporary, non-persistent data during script execution. Their scopes and lifetimes are vastly different, and choosing the wrong one can lead to performance degradation, security vulnerabilities, or bizarre bugs.

  • request.custom: This object lives for the duration of a single HTTP request. It is the most ephemeral of the scopes. Its primary purpose is to pass data between middleware steps in an SFRA controller chain or from a controller to the rendering template within the same server call. It’s a scratchpad for the current transaction and nothing more.
  • session.custom / session.privacy: These objects live for the duration of a user’s session. The platform defines this with a 30-minute soft timeout (which logs the user out and clears privacy data) and a six-hour hard timeout (after which the session ID is invalid). This scope is user-specific and sticky to a single application server. The critical difference is that writing to session.custom can trigger a re-evaluation of the user’s dynamic customer groups, while session.privacy does not. Data in session.privacy is also automatically cleared on logout.
  • dw.system.CacheMgr: This is an application-wide, server-specific cache. The data is shared by all users and all sessions that happen to land on the same application server. Its lifetime is determined either by a configured time-to-live (TTL) or until a major invalidation event occurs, such as a code activation or data replication. A minimal sketch of all three scopes follows below.
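
A minimal sketch to make the three scopes concrete. The attribute names, values, and the 'SiteConfig' cache ID are illustrative assumptions; the cache itself must be declared in caches.json, as described in the next section.

var CacheMgr = require('dw/system/CacheMgr');

// request scope: gone as soon as this HTTP request finishes.
request.custom.startTime = Date.now(); // scratchpad between middleware steps

// session scope: per shopper, survives across requests until the session ends.
session.privacy.preferredStoreId = 'store-001'; // cleared automatically on logout

// CacheMgr scope: shared by every shopper that lands on this application server.
var configCache = CacheMgr.getCache('SiteConfig');
var flags = configCache.get('feature-flags', function () {
    return { enableWishlist: true }; // loaded once per server, then reused
});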

 

The Forge: Mechanics of a Custom Cache

Once you’ve determined that a Custom Cache is the right tool, implementation requires a precise, methodical approach. There is no room for improvisation. Follow these steps as a mandatory checklist.

The Blueprint: Defining Caches in caches.json

Image Alt Text: A friendly cartoon character in a flat vector style, building a data cache from a blueprint, with vibrant data lines flowing into the structure.

Your cache’s life begins with a simple declaration. This is done in a JSON file, conventionally named caches.json, which must reside within your cartridge.       

1. Create caches.json: Inside your cartridge, create the file. For example: int_mycartridge/caches.json.

2. Define Your Caches: The file contains a single JSON object with a caches key, which is an array of cache definitions. Each definition requires an id and can optionally include an expireAfterSeconds property.

				
					{
  "caches": [
    {
      "id": "UnlimitedTestCache"
    },
    {
      "id": "TestCacheWithExpiration",
      "expireAfterSeconds": 10
    }
  ]
}
				
			

The id must be globally unique across every single cartridge in your site’s cartridge path. A duplicate ID will cause the cache to silently fail to initialize, with the only evidence being an error in the logs. The expireAfterSeconds sets a TTL for entries in that cache. If omitted, entries have no time-based expiration and persist until the next global cache clear event.

3. Register in package.json: The platform needs to know where to find your definition file. Reference it in your cartridge’s package.json using the caches key. The path is relative to the package.json file itself.

				
					{
    "caches": "./caches.json"
}
				
			

4. Enable in Business Manager: Finally, you must globally enable the custom cache feature. Navigate to Administration > Operations > Custom Caches and check the “Enable Caching” box.  Disabling this will clear all custom caches on the instance. This page will also become your primary tool for monitoring cache health.

A screenshot of the "Administration > Operations > Custom Caches" screen in the Business Manager.

The Core API Arsenal: CacheMgr and Cache

The script API for interacting with your defined caches is straightforward, revolving around two classes: dw.system.CacheMgr and dw.system.Cache.  

  • CacheMgr.getCache(cacheID): This is your entry point. It retrieves the cache object that you defined in caches.json.

  • cache.put(key, value): Directly places an object into the cache under a specific key, overwriting any existing entry.

  • cache.get(key): Directly retrieves an object from the cache for a given key. It returns undefined if the key is not found. 

  • cache.invalidate(key): Manually removes a single entry from the cache.

While these methods are simple, using them directly is a beginner’s trap. A typical but flawed pattern is:

if (!cache.get(key)) { cache.put(key, loadData()); }

This code is not atomic. On a busy server, two concurrent requests could both evaluate the if condition as true, both execute the expensive loadData() function, and one will wastefully overwrite the other’s result. This is inefficient and can lead to race conditions.

The "Get-or-Load" Pattern: The Only Way to Populate Your Cache

There is a better way. It is the (in my opinion) only acceptable way to read from and write to a custom cache: the cache.get(key, loader) method.

This method combines the get and put operations into a single, atomic action on the application server. It attempts to retrieve the value for a key. If it’s a miss, it executes the loader callback function, places the function’s return value into the cache, and then returns it. If the loader function returns undefined (not null), the failure is not cached. This keeps your logic clean and concise. (And hopefully, behind that black box, the concurrency conundrum has been taken care of 😇)

Here is the implementation for fetching data from a third-party API:

				
					var CacheMgr = require('dw/system/CacheMgr');
var MyHTTPService = require('~/cartridge/scripts/services/myHTTPService');
var Site = require('dw/system/Site');

/**
 * Retrieves data for a given API endpoint, utilizing a custom cache.
 * @param {string} apiEndpoint - The specific API endpoint to call.
 * @returns {Object|null} - A plain JavaScript object with the API data, or null on failure.
 */
function getApiData(apiEndpoint) {
    // Retrieve the cache defined in caches.json
    var apiCache = CacheMgr.getCache('ExternalRatingsAPI');

    // Construct a robust, unique cache key
    var cacheKey = Site.current.ID + '_api_data_' + apiEndpoint;

    // Use the get-or-load pattern.
    var result = apiCache.get(cacheKey, function() {
        // This loader function only executes on a cache miss for this specific key.
        var service = MyHTTPService.getService();
        var serviceResult = service.call({ endpoint: apiEndpoint });

        // Check for a successful result before caching
        if (serviceResult.ok && serviceResult.object) {
            // IMPORTANT: Return a simple JS object, not the full service result.
            // This prevents caching large, complex API objects.
            try {
                return JSON.parse(serviceResult.object.text);
            } catch (e) {
                // Failed to parse, don't cache the error.
                return undefined;
            }
        }

        // Returning undefined prevents caching a failure.
        return undefined;
    });

    return result;
}
				
			

The Art of the Key: Your Cache's True Identity

Developers often obsess over the value being cached, but this is a strategic error. The value is just data; the key is the entire strategy. A poorly designed key will lead to cache collisions (serving wrong data) or cache misses (negating any performance benefit).

An anti-pattern, such as adding a dynamic and irrelevant product position parameter to a product tile’s cache key, can lead to a near-zero hit rate, rendering the cache completely useless.

The Anatomy of a Perfect Key

A robust cache key is not just a string; it’s a self-documenting, collision-proof identifier. Every key you create should be:

  1. Unique: It must uniquely identify a single piece of cacheable data.

  2. Predictable: You must be able to deterministically reconstruct the exact same key whenever you need to access the data.

  3. Scoped: It must contain all the context necessary to distinguish it from similar data for other sites, locales, or conditions.

A highly effective pattern is to build keys from concatenated, delimited parts: PURPOSE::SCOPE::IDENTIFIER::CONTEXT.

  • Bad Key: '12345' (What is it? A product? A category? For which site?)

  • Good Key: 'product_tile_data::RefArch_US::12345_blue::en_US'

This structure prevents a product cache from colliding with a content cache, ensures data for the US site doesn’t leak into the EU site, and makes debugging from logs infinitely easier because the key itself tells you exactly what it’s for. Always include Site.current.ID and the current locale for any site- or language-specific data.
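
A small helper keeps that PURPOSE::SCOPE::IDENTIFIER::CONTEXT convention consistent across a codebase. The function name, delimiter, and the example output below are a suggested convention, not a platform API:

var Site = require('dw/system/Site');

/**
 * Builds a namespaced cache key: PURPOSE::SCOPE::IDENTIFIER::CONTEXT
 * @param {string} purpose - what the entry is, e.g. 'product_tile_data'
 * @param {string} identifier - the main ID, e.g. a product or content ID
 * @param {string} [context] - extra qualifiers such as a variant or currency
 * @returns {string} a collision-proof, self-documenting cache key
 */
function buildCacheKey(purpose, identifier, context) {
    var scope = Site.current.ID + '_' + request.locale; // site + locale scoping
    return [purpose, scope, identifier, context || 'default'].join('::');
}

// buildCacheKey('product_tile_data', '12345_blue');
// -> e.g. 'product_tile_data::RefArch_US_en_US::12345_blue::default'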

The Complexity of Excess

While it might seem clever to make your cache key highly specific and unique, this can backfire by reducing the chances of cache hits.

Striking the right balance is key. (pun intended)

I’ve also seen situations where the effort spent retrieving extensive data from the database to craft the key ends up cancelling out the performance benefits of custom caching.
After all, if generating the key takes longer than the cache saves, it’s time to rethink the approach.

The Serialization Conundrum: Caching API Objects vs. POJOs

You must not cache raw SFCC API objects. Never put a dw.catalog.Product, dw.order.Order, or dw.catalog.ProductInventoryList object directly into the cache.

While the documentation ambiguously states that “tree-like object structures” can be stored, this is a siren song leading to disaster. These API objects are heavyweight, carry live database connections, are not truly serializable, and can easily blow past the 128KB per-entry size limit, causing silent write failures that are only visible in the logs. 

The only performant and safe approach is to map the data you need from the heavy API object into a lightweight Plain Old JavaScript Object (POJO) or Data Transfer Object (DTO) before caching it.

Anti-Pattern: Caching the Full API Object

				
					// DO NOT DO THIS
var ProductMgr = require('dw/catalog/ProductMgr');
var productCache = CacheMgr.getCache('ProductData');

productCache.get('some-product-id', function () {
    var product = ProductMgr.getProduct('some-product-id');
    return product; // Caching the entire, heavy dw.catalog.Product object
});
				
			

Correct Pattern: Caching a Lightweight POJO

				
					// THIS IS THE CORRECT WAY
var ProductMgr = require('dw/catalog/ProductMgr');
var productCache = CacheMgr.getCache('ProductData');

productCache.get('some-product-id', function () {
    var product = ProductMgr.getProduct('some-product-id');
    if (!product) {
        // We store null in the cache
        return null;
    }

    // Create a lightweight POJO with only the data you need
    var productPOJO = {
        id: product.ID,
        name: product.name,
        shortDescription: product.shortDescription ? product.shortDescription.markup : '',
        price: product.priceModel.price.value
    };

    return productPOJO; // Cache the small, clean object
});
				
			

This approach creates smaller, faster, and safer cache entries. It decouples your cached data from the live object model and respects the platform’s limitations.

The release notes even mention that custom caches are intended to return immutable objects, reinforcing that you should be working with copies of data, not live API instances. 

In the Trenches: Real-World Battle Plans

With the theory and mechanics established, let’s apply them to the most common scenarios where custom caches provide the biggest performance wins.

Use Case 1: Taming External API Latency

This is the poster child for custom caches. Your site needs to display real-time shipping estimates, user-generated reviews, or social media feeds from a third-party service. Making a live HTTP call on every page load is a recipe for a slow, unreliable site. By wrapping the service call in the “get-or-load” pattern, you can cache the response for a few minutes, drastically reducing latency and insulating your site from temporary blips in the third-party service’s availability. 

Remember, there’s another option I mentioned in a previous article: using the ServiceRegistry for caching.

Use Case 2: Caching Expensive Computations

Some business logic is just plain expensive. The classic example is determining if a main product should display an “On Sale” banner by iterating through all of its variation products to check their promotion status. On a product grid page with 24 products, each with 10 variants, this could mean hundreds of object inspections just to render the page. This is a perfect candidate for a custom cache.

Calculate the result once, store the simple boolean result in a cache with a key like 'main_promo_status::' + mainPid, and set a reasonable TTL (e.g., 15 minutes) to align with promotion update frequencies.
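
A minimal sketch of that idea, assuming a 'PromotionStatus' cache with a roughly 900-second TTL has been declared in caches.json; the promotion check itself is simplified:

var CacheMgr = require('dw/system/CacheMgr');
var PromotionMgr = require('dw/campaign/PromotionMgr');

/**
 * Returns true when any variant of the main product is part of an active promotion.
 * @param {dw.catalog.Product} mainProduct - the master/main product
 * @returns {boolean}
 */
function hasPromotedVariant(mainProduct) {
    var promoCache = CacheMgr.getCache('PromotionStatus');

    return promoCache.get('main_promo_status::' + mainProduct.ID, function () {
        var variants = mainProduct.variants.iterator();
        while (variants.hasNext()) {
            var promotions = PromotionMgr.activePromotions.getProductPromotions(variants.next());
            if (!promotions.empty) {
                return true; // cache only the simple boolean, never the promotion objects
            }
        }
        return false;
    });
}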

Use Case 3: "Configuration as Code"

Instead of fetching site-level configurations or feature switches from the database (Site Preferences or Custom Objects) on every request, you can write a small helper that loads this data into a long-lived custom cache on the first request; subsequent requests will retrieve the configuration directly from memory.

This approach significantly reduces the load on the database while providing lightning-fast access to configuration data.
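
A sketch of such a helper, assuming a long-lived 'SiteConfig' cache (declared without expireAfterSeconds) and two illustrative site preference IDs that would need to exist in your metadata:

var CacheMgr = require('dw/system/CacheMgr');
var Site = require('dw/system/Site');

/**
 * Loads selected site preferences into a long-lived custom cache.
 * @returns {Object} a plain object with the configuration values
 */
function getSiteConfig() {
    var configCache = CacheMgr.getCache('SiteConfig');

    return configCache.get('config::' + Site.current.ID, function () {
        // Copy only simple values into a POJO; never cache the preferences object itself.
        return {
            enableRatings: Site.current.getCustomPreferenceValue('enableRatings') === true,
            freeShippingThreshold: Site.current.getCustomPreferenceValue('freeShippingThreshold') || 0
        };
    });
}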

The Minefield: Warnings, Anti-Patterns, and How to Survive

Now for the most crucial section of this guide. Understanding these pitfalls is what separates a developer who uses caches effectively from one who creates production incidents.

The Great Myth: Cross-Server Invalidation

Let this be stated as clearly as possible: There is no reliable, built-in mechanism to invalidate a single custom cache key across all application servers in a production environment.

The cache.invalidate(key) method is a trap. It is functionally useless for ensuring data consistency on a multi-server POD. It only clears the key on the single application server that happens to execute the code. The other 2, 5, or 10 servers in the instance will continue to happily serve the stale data until their TTL expires or a global event occurs.

The only ways to reliably clear a custom cache across an entire instance are these “sledgehammer” approaches:

  • Data Replication: A full or partial data replication will clear all custom caches.

  • Code Activation: Activating a new code version clears all custom caches.

  • Manual Invalidation: A Business Manager user navigating to Administration > Operations > Custom Caches and clicking the “Clear” button for a specific cache (for each app server).

This limitation has profound architectural implications. It means you must design your caching strategy around time-based expiration (expireAfterSeconds). You have to accept and plan for a window of potential data staleness. Do not attempt to build a complex, event-driven invalidation system (e.g., trying to have a job invalidate a key). It is doomed to fail in a multi-server environment.

Caching User-Specific Data

A cardinal sin. Never put Personally Identifiable Information (PII) or any user-specific data in a global custom cache. It is a massive security vulnerability and functionally incorrect, as the data will be shared across all users on that server. 

Use session.privacy for user-specific data.

The Rogue's Gallery: Other Common Pitfalls

  • Ignoring the 20MB Total Limit: This is a hard limit for all custom caches on a single application server. One misbehaving cache that stores massive objects can pollute the entire 20MB space, causing the eviction of other, well-behaved caches. 

  • Ignoring the 128KB Entry Limit: Trying to put an object larger than 128KB will result in a “write failure” that is only visible in the Business Manager cache statistics and custom logs. It does not throw an exception, so your code will appear to work while the cache remains empty.

  • Assuming Cache is Persistent: It is transient, in-memory storage. It is not a database. A server restart, code deployment, or random eviction can wipe your data at any time. Your code must always be able to function correctly on a cache miss.

The Watchtower: Monitoring Your Cache's Health

You cannot manage what you do not measure. A “set it and forget it” approach to caching is irresponsible. You must actively monitor the health and performance of your caches.

Reading the Tea Leaves: The Business Manager Custom Caches Page

Your primary dashboard is located at Administration > Operations > Custom Caches. This page lists all registered caches and provides statistics for the last 15 minutes on the current application server. The key metrics to watch are:

  • Hits / Total: This is your hit ratio. For a frequently accessed cache, this number should be very high (ideally 95%+). A low hit ratio means your cache is ineffective. This could be due to poorly designed keys, a TTL that is too short, or constant cache clearing.

  • Write Failures: This number must be zero. A non-zero value is a critical alert. It almost certainly means you are violating the 128KB per-entry size limit, likely by trying to cache a full API object instead of a POJO.

  • Clear Button: The manual override. Use it when you need to force a refresh of a specific cache’s data across all application servers.

A Debugging Workflow: From Dashboard to Code

When you identify a performance problem, follow this systematic process to diagnose cache-related issues:

  1. Observe (Production): Start in Reports & Dashboards > Technical. Sort by “Percentage of Processing Time” or “Average Response Time” to find your slowest controllers and remote includes. These are your top suspects. Note their cache hit ratios in the report. A low hit ratio on a slow controller is a huge red flag.

  2. Hypothesize (Business Manager): Go to the Custom Caches page. Does the slow controller use a custom cache? Is that cache showing a low hit rate or, worse, write failures? This helps correlate the storefront performance issue with a specific cache’s health.

  3. Reproduce & Pinpoint (Development): Switch to a development instance. Use the Pipeline Profiler to get a high-level timing breakdown of the suspect controller. This tool confirms which parts of the request are slow, but it does not show cached requests. To dig deeper into the code itself, use the Code Profiler: run the uncached controller and look for the specific script lines or API calls that consume the most execution time. This will tell you exactly what expensive operation needs to be wrapped in a cache call.

Wielding the Cache with Confidence

Custom Caches are not inherently good or bad. They are powerful. And like any powerful tool, they demand respect, understanding, and discipline. The path to mastery is not through memorising API calls, but through internalising a set of non-negotiable principles.

  1. Cache Data, Not HTML: Use Custom Cache for application data, Page Cache for rendered output.

  2. Choose the Right Scope: Understand the difference between request, session, and cache. Misuse is costly.

  3. The Key is the Strategy: Be deliberate and systematic in how you name things. A good key is self-documenting and collision-proof.

  4. Embrace “Get-or-Load”: The cache.get(key, loader) pattern is the only safe and atomic way to populate a cache. Use it. Always.

  5. Cache POJOs, Not API Objects: Map heavy API objects to lightweight POJOs before caching to save memory and avoid errors.

  6. Accept the Invalidation Myth: Granular, cross-server invalidation is not a feature. Design around TTL and embrace a small window of potential staleness.

  7. Monitor Relentlessly: Use the Business Manager dashboards and profilers to keep a constant watch on your cache’s health.

By adhering to these rules, you transform the custom cache from a source of unpredictable bugs into a reliable, high-performance asset.

The post Field Guide to Custom Caches: Wielding a Double-Edged Sword appeared first on The Rhino Inquisitor.

]]>
Session Sync Showdown: From plugin_slas to Native Hybrid Auth in SFRA and SiteGenesis https://www.rhino-inquisitor.com/slas-in-sfra-or-sitegenesis/ https://www.rhino-inquisitor.com/slas-in-sfra-or-sitegenesis/#comments Thu, 24 Jul 2025 20:52:39 +0000 https://www.rhino-inquisitor.com/?p=617 Headless APIs have been available in Salesforce B2C Commerce Cloud for some time, under the “OCAPI (Open Commerce API.).” In 2020, a new set of APIs, known as the SCAPI (Salesforce Commerce API), was introduced. Within that new set of APIs, a subset was focused on giving developers complete control of the login process of […]

The post Session Sync Showdown: From plugin_slas to Native Hybrid Auth in SFRA and SiteGenesis appeared first on The Rhino Inquisitor.

]]>

Headless APIs have been available in Salesforce B2C Commerce Cloud for some time, under the name “OCAPI” (Open Commerce API). In 2020, a new set of APIs, known as the SCAPI (Salesforce Commerce API), was introduced.

Within that new set of APIs, a subset was focused on giving developers complete control of the login process of customers, called SLAS (Shopper Login And API Access Service). In February 2022, Salesforce also released a cartridge for SFRA, enabling easy incorporation of SLAS within your current setup.

But let’s cut to the chase. The plugin_slas cartridge (which we will discuss later in the article) was a necessary bridge for its time, but it also introduced performance bottlenecks, API quota concerns, and maintenance headaches. 
With the release of native Hybrid Authentication, Salesforce has fundamentally changed the game for hybrid SFRA/Composable storefronts. This guide is your in-depth exploration of the “why” and “how”—we’ll dissect the architectural shift and equip you with the strategic insights you need.

What is SLAS?

A diagram showing the different steps of the SLAS process.

But what is SLAS, anywho? It is a set of APIs that allows secure access to Commerce Cloud shopper APIs for headless applications.

Some use-cases:

  • Single Sign-On: Allow your customers to use a single set of log-ins across multiple environments (Commerce Cloud vs. a Community Portal)

  • Third-Party Identity Providers: Use third-party services that support OpenID like Facebook or Google.

Why use SLAS?

Looking at the above, you might think: “But can’t I already do these things with SFRA and SiteGenesis?”

In a way, you’re right. These login types are already supported in the current system. However, they can’t be used across other applications, such as Endless Aisle, kiosks, or mobile apps, without additional development. You will need to create custom solutions for each case.

SLAS is a headless API that can be used by all your channels, whether they are Commerce Cloud or not.

Longer log-in time

People familiar with Salesforce B2C Commerce Cloud know that the storefront logs you out after 30 minutes of inactivity. Many projects have requested a longer session, especially during checkout, as this can be particularly frustrating. 

Previously, extending this timeout wasn’t possible. Now, with SLAS, you can increase it up to 90 days! Yes, you read correctly—a significant three-month extension compared to previous options!

The Old Guard: A Necessary Evil Called plugin_slas

To understand where we’re going, we have to respect where we’ve been. When Salesforce B2C Commerce Cloud began its push into the headless and composable world with the PWA Kit, a significant architectural gap emerged. 

The traditional monoliths, Storefront Reference Architecture (SFRA) and SiteGenesis, managed user sessions using a dwsid cookie. The new headless paradigm, however, operates on a completely different authentication mechanism: the Shopper Login and API Access Service (SLAS), which utilises JSON Web Tokens (JWTs).

For any business looking to adopt a hybrid model—keeping parts of their site on SFRA while building new experiences with the PWA Kit—this created a jarring disconnect. How could a shopper’s session possibly persist across these two disparate worlds?

The Problem It Solved: A Bridge Over Troubled Waters

Salesforce’s answer, released in February 2022, was the plugin_slas cartridge. It was designed as a plug-and-play solution for SFRA that intercepted the standard login process. Instead of relying on the traditional dw.system.Session script API calls for authentication, the cartridge rerouted these flows through SLAS. This clever maneuver effectively “bridged” the two authentication systems, allowing a shopper to navigate from a PWA Kit page to an SFRA checkout page without losing their session or their basket.  

For its time, the cartridge was a critical enabler. It unlocked the possibility of hybrid deployments and introduced powerful SLAS features to the monolithic SFRA world, such as integration with third-party Identity Providers (IDPs) like Google and Facebook, as well as the much-requested ability to extend shopper login times from a paltry 30 minutes to a substantial 90 days.

The Scars It Left: The True Cost of the Cartridge

While the plugin_slas cartridge solved an immediate and pressing problem, it came at a significant technical cost. Developers on the front lines quickly discovered the operational friction and performance penalties baked into its design.

  • The Performance Tax: The cartridge introduced three to four remote API calls during login and registration. These weren’t mere internal functions; they involved network-heavy SCAPI and OCAPI calls used for session bridging. This design resulted in noticeable latency during the crucial authentication phase. Every login, registration, and session refresh experienced this delay, impacting user experience.

  • The API Quota Black Hole: This was perhaps the most challenging issue for development teams, especially when the quota limit was still 8 – this is now 16, luckily. B2C Commerce enforces strict API quotas that cap the number of API calls per storefront request. The plugin_slas cartridge could consume four, and in some registration cases, even five API calls just to log in a user.

    Using nearly half of the API limit for authentication alone was a risky strategy. This heavily restricted other vital operations, such as retrieving product information, checking inventory, or applying promotions, all within the same request. It led to constant stress and compelled developers to create complex, fragile workarounds.

  • The Maintenance Quagmire: As a cartridge, plugin_slas was yet another piece of critical code that teams had to install, configure, update, and regression test. When Salesforce released bug fixes or security patches for the cartridge, it required a full deployment cycle to get them into production. This added operational overhead and introduced another potential point of failure in the authentication path, a path that demands maximum stability and security. The cartridge was a tactical patch on a strategic problem, and its very architecture—an external add-on making remote calls back to the platform—was the root cause of its limitations.

The New Sheriff in Town: Platform-Native Hybrid Authentication

A classic-style robot labeled "plugin_slas cartridge" hands a glowing purple key to a sleek, modern robot labeled "Hybrid Authentication." They are standing on a path leading from a quaint town labeled "SFRA" to a futuristic city skyline, under a bright, sunny sky.
The transition to the future of authentication, as the classic "plugin_slas cartridge" passes the key to the newest "Hybrid Authentication."

Recognising the limitations of the cartridge-based approach, Salesforce went back to the drawing board and engineered a proper, strategic solution. Released with B2C Commerce version 25.3, Hybrid Authentication is not merely an update; it is a fundamental architectural evolution.

What is Hybrid Auth, Really? It's Not Just a Cartridge-ectomy

Hybrid Authentication is best understood as a platform-level session synchronisation engine. It completely replaces the plugin_slas cartridge by moving the entire logic of keeping the SFRA/SiteGenesis dwsid and the headless SLAS JWT in sync directly into the core B2C Commerce platform. 

This isn’t a patch or a workaround; it’s a native feature. The complex dance of bridging sessions is no longer the responsibility of a fragile, API-hungry cartridge but is now handled automatically and efficiently by the platform itself.

The Promised Land: Core Benefits of Going Native

For developers and architects, migrating to Hybrid Auth translates into tangible, immediate benefits that directly address the pain points of the past.

  • Platform-Native Data Synchronisation: The session bridging process is now an intrinsic part of the platform’s authentication flow. This means no more writing, debugging, or maintaining custom session bridging code. It simply works out of the box, managed and maintained by Salesforce.

  • A Seamless Shopper Experience: By eliminating the clunky, multi-call process of the old cartridge, the platform ensures that session state is synchronised far more reliably and with significantly less latency. The nightmare scenario of a shopper losing their session or basket when moving between a PWA Kit page and an SFRA page is effectively neutralised. This seamlessness extends beyond just the session, automatically synchronising Shopper Context data and “Do Not Track” (DNT) preferences between the two environments.

  • Full Support for All Templates: Hybrid Authentication is a first-class citizen for both SFRA and, crucially, the older SiteGenesis architecture. This provides a fully supported, productized, and stable path toward a composable future for all B2C Commerce customers, regardless of their current storefront template.

Is The Promised Land Free of Danger?

As with any new feature or solution, early adoption often means less community support initially, and you may encounter unique issues as one of the first partners or customers.

Therefore, it’s essential to review all available documentation and thoroughly test various scenarios in testing environments, such as a sandbox or development environment, before deploying to production.

Hardening Your Security Posture for 2025 and Beyond

The security landscape for web authentication is constantly evolving. The migration to Hybrid Auth presents a perfect opportunity to not only simplify your architecture but also to modernise your security posture and ensure compliance with the latest standards.

The 90-Day Session: A Convenience or a Liability?

While this extended duration is highly convenient for users on trusted personal devices, such as mobile apps, it remains a significant security liability on shared or public computers. If a user authenticates on a library computer, their account and personal data could be exposed for up to three months. 

The power to configure this timeout lies within your SLAS client’s token policy. It is strongly recommended that development, security, and legal teams collaborate to define a session duration that strikes an appropriate balance between user convenience and risk. For most web-based storefronts, a much shorter duration, such as 1 to 7 days, is a more prudent and secure choice.

Modern SLAS Security Mandates You Can't Ignore

Since the plugin_slas cartridge was first introduced, Salesforce has rolled out several security enhancements that are now effectively mandatory. Failing to address them during your migration will result in a broken or insecure implementation.

  • Enforcing Refresh Token Rotation: This is a major change, aligning with the OAuth 2.1 security specification. For public clients, which include most PWA Kit storefronts, SLAS now prohibits the reuse of a refresh token. When an application uses a refresh token to get a new access token, the response will contain a new refresh token. The application must store and use this new refresh token for subsequent refreshes. Attempting to reuse an old refresh token will result in a 400 'Invalid Refresh Token' error. The plugin_slas cartridge had to be updated to version 7.4.1 to support this, and any custom headless frontend must be updated to handle this rotation logic. (A minimal client-side sketch of this rotation handling follows after this list.)

  • Stricter Realm Validation: To enhance security and prevent misconfiguration, SCAPI requests now undergo stricter validation to ensure the realm ID in the request matches the assigned short code for that realm. A mismatch will result in a 404 Not Found error.

  • Choosing the Right Client: Public vs. Private: The fundamental rule of OAuth 2.0 remains paramount. If your application cannot guarantee the confidentiality of a client secret (e.g., a client-side single-page application or a native mobile app), you must use a public client. If the secret can be securely stored on a server (e.g., in a traditional web app or a Backend-for-Frontend architecture), you should use a private client.
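
To illustrate the rotation point above, here is a minimal client-side sketch for a public client. The storage keys and function name are assumptions, and you should verify the exact token-endpoint parameters against the current SLAS documentation before relying on it:

// Minimal sketch of SLAS refresh token rotation handling for a public client.
async function refreshSlasSession(shortCode, organizationId, clientId) {
    const tokenUrl = 'https://' + shortCode + '.api.commercecloud.salesforce.com'
        + '/shopper/auth/v1/organizations/' + organizationId + '/oauth2/token';

    const response = await fetch(tokenUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
            grant_type: 'refresh_token',
            refresh_token: sessionStorage.getItem('slas_refresh_token'),
            client_id: clientId
        })
    });

    if (!response.ok) {
        // A 400 here often means the refresh token was already used once (rotation): force a fresh login.
        throw new Error('Refresh token rejected; the shopper must authenticate again.');
    }

    const tokens = await response.json();
    // Rotation: always persist the NEW refresh token - the old one is now invalid.
    sessionStorage.setItem('slas_access_token', tokens.access_token);
    sessionStorage.setItem('slas_refresh_token', tokens.refresh_token);
    return tokens.access_token;
}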

Because the migration to Hybrid Auth requires touching authentication code on both the SFCC backend and the headless frontend, it is the ideal and necessary time to conduct a full security audit. The migration project’s scope must include updating your implementation to meet these new, stricter standards.

Conclusion: Be the Rhino, Not the Dodo

Migrating from the plugin_slas cartridge to native Hybrid Authentication is not just a simple version bump or a minor refactor; it is a strategic architectural upgrade. It’s an opportunity to pay down significant technical debt, reclaim lost performance, eliminate API quota anxiety, and dramatically simplify your hybrid architecture. 

This shift is a clear signal of Salesforce’s commitment to making the composable and hybrid developer experience more robust, stable, and platform-native. By embracing foundational platform features, such as Hybrid Authentication, over temporary, bolt-on cartridges, you are actively future-proofing your implementation and aligning with the platform’s strategic direction. Don’t let your hybrid architecture become a relic held together by legacy code. 

Be the rhino: charge head-first through the complexity and build on the stronger foundation the platform now provides.

The post Session Sync Showdown: From plugin_slas to Native Hybrid Auth in SFRA and SiteGenesis appeared first on The Rhino Inquisitor.

]]>
https://www.rhino-inquisitor.com/slas-in-sfra-or-sitegenesis/feed/ 1
The Ultimate SFCC Guide to Finding Your POD Number https://www.rhino-inquisitor.com/the-sfcc-guide-to-finding-pod-numbers/ Mon, 21 Jul 2025 05:05:51 +0000 https://www.rhino-inquisitor.com/?p=13311 Knowing your POD number isn't just trivia; it's a critical piece of operational intelligence. It's the key to configuring firewalls, anticipating maintenance and troubleshooting effectively.

The post The Ultimate SFCC Guide to Finding Your POD Number appeared first on The Rhino Inquisitor.

]]>

As a Salesforce B2C Commerce Cloud developer, you operate within a sophisticated, multi-tenant cloud architecture. While Salesforce masterfully handles the underlying infrastructure, there are times when you need to peek behind the curtain. One of the most common—and often surprisingly elusive—pieces of information you’ll need is your instance’s POD number.

Knowing your POD number isn’t just trivia; it’s a critical piece of operational intelligence. It’s the key to configuring firewalls, anticipating maintenance, troubleshooting effectively, and optimising performance. This guide is your definitive resource for uncovering that number. We’ll explore every method available, from clever UI tricks to the official channels, so you can master your environment.

What is an SFCC POD, and Why Should You Care?

Before we dive into the “how,” let’s establish the “what” and “why.” In the Salesforce B2C Commerce ecosystem, a POD (Point of Delivery) is not just a single server. It is a complete, self-contained infrastructure cluster hosting the multi-tenant Software as a Service (SaaS) application. Think of it as a group of hardware—including firewalls, load balancers, application servers, and storage systems—that multiple customers share. Salesforce manages this grid, continually adding new PODs and refurbishing existing ones to balance loads, enhance performance, and improve disaster recovery capabilities.

This SaaS model is a significant advantage, enabling your team to focus on building exceptional storefronts instead of managing hardware. 

Salesforce also frequently performs “POD moves,” migrating entire customer realms to new hardware to ensure performance and reliability. By treating the POD as a transient, infrastructure-level detail rather than a permanent, customer-facing setting, Salesforce maintains the flexibility to manage the grid without requiring constant configuration changes on your end.

This means that for developers, finding the POD number is an act of reconnaissance. We must learn how to query the system’s current state. Here’s why this knowledge is indispensable:

  • Firewall & Integration Configuration: This is the most frequent reason you’ll need your POD number. When setting up integrations with third-party systems, such as payment gateways, Order Management Systems (OMS), or tax providers, their security policies often require you to allowlist the outbound IP addresses from your SFCC instances. These IP addresses are specific to the POD on which your realm resides. For a seamless transition during a potential POD move, it is best practice to allowlist the IPs for both your current POD and its designated Disaster Recovery (DR) POD at all times. (We’ll explain where to find those later)

  • Understanding Maintenance Schedules: Salesforce announces maintenance windows and incidents on its Trust site on a per-POD basis. Knowing your POD number is the only way to accurately anticipate downtime for your Primary Instance Group (PIG), allowing you to plan releases and testing cycles effectively.

  • Troubleshooting & Support: When diagnosing elusive connectivity issues, performance degradation, or other strange behaviour, knowing the POD is a crucial data point. It’s one of the first things you should check, and it’s vital information to include when opening a support case with Salesforce to expedite a resolution.

  • Performance Optimisation: In the modern era of composable storefronts, performance is paramount. For sites built with the PWA Kit and Managed Runtime, deploying your Progressive Web App (PWA) to a Managed Runtime region that is geographically close to your data’s POD is critical for minimising latency and delivering the fast page loads that customers expect.

The Shift to Hyperforce: What It Means for PODs

Salesforce is fundamentally changing its infrastructure by migrating B2C Commerce Cloud to Hyperforce, its next-generation platform built on public cloud technology. This strategic move away from traditional Salesforce-managed data centres allows for greater scalability, enhanced security, and improved performance by leveraging the global reach of public cloud providers. For anyone working with SFCC, understanding this transition is crucial, as it marks a significant evolution in how the platform is architected and managed. The core takeaway is that the classic concept of a static, identifiable POD is becoming a thing of the past for realms on Hyperforce.

With the adoption of Hyperforce, the architecture is far more dynamic. Your SFCC instance is no longer tied to a single, fixed data centre or a specific POD number that can be easily identified through a URL or IP address lookup. This means that many of the clever methods currently used to pinpoint your POD will no longer be reliable once your realm is migrated.

Instead of a predictable POD, your instance operates within a more fluid public cloud environment.

The UI Sleuth: Finding Your POD with a Few Clicks

For those times when you need a quick answer, these browser-based methods are your best friends. (Yes, we went from singular to plural.)

Method 1: The Custom Maintenance Page Trick

This is a clever, indirect method that leverages the way Business Manager generates preview links. It’s highly reliable for determining the POD of your PIG instances (Development, Staging, Production).

    1. Log in to the Business Manager of the instance you want to investigate.

    2. Navigate to Administration > Site Development > Custom Maintenance Pages.

    3. In the Preview section, you will see links for your various storefronts. If you don’t have a maintenance page uploaded, you must upload one first. You can download a template from this same page and create a simple .zip file to enable the preview links.

    4. Locate the (Production) link.

    5. Do not click the link. Instead, hover your mouse cursor over it.

    6. Look at your browser’s status bar (usually in the bottom-left corner). It will display the destination URL, and within that URL, you will find the POD number.

      For example, the URL might look something like https://pod185.production.demandware.net/..., clearly indicating you are on POD 185.

Method 2: The (lightning) PIG instance footer

By far the easiest and quickest option to explain.

Go to your staging, development, or production instance, log in, and finally look at the bottom right of any page to see the POD number in the footer!

The Account Manager Prerequisite

While you cannot find the POD number directly in Account Manager, it is the source for prerequisite information you will need for other methods, particularly when contacting support. Users with the Account Administrator role are the only ones who can access this information. 

To find your Realm and Organization IDs:

  1. Log in to Account Manager at https://account.demandware.com.

  2. Navigate to the Organization tab.

  3. Open your organization and in the Assigned Realms section, you can find your 4-letter Group ID and the alphanumeric Realm ID.  

Keep this information handy. It’s essential for identifying your environment when interacting with Salesforce systems and support teams.

Method 3: The Legacy Log Center URL (A History Lesson)

This method is now largely historical (migrated in 2023), but it remains important for context, especially if you work on older projects or encounter references to it in internal documentation.

Before the 2023 migration to a centralised logging platform, each POD had a dedicated Log Center application. The URL format explicitly included the POD number:

https://logcenter-<POD-No.><Cylinder>-hippo.demandware.net/logcenter

The <Cylinder> value was also significant: 00 for a SIG (your sandboxes) and 01 for a PIG (Dev, Staging, Prod). 

The platform’s evolution toward a more abstracted, public cloud infrastructure is evident in this instance. The old Log Center URL was tied directly to a specific hardware group (hippo.demandware.net), reflecting a more rigid infrastructure. 

The new, centralised Log Centre decouples logging from the specific POD where an instance runs, using regional endpoints instead (e.g., AMER, EU, APAC). This shift is a classic pattern in modern cloud services, favouring centralised, scalable functions over hardware-specific endpoints.

Although this legacy URL is no longer a reliable method for active discovery, understanding its history offers insight into the platform’s architectural evolution.  

The Official Channels: Guaranteed but Less Immediate

A friendly rhino in a 2D flat cartoon style, similar to Salesforce illustrations, walks towards an official building with a cloud logo, representing going to official channels for trusted information.
On the right path: Getting information from the official source.

When you need an officially sanctioned answer or want to monitor the health of your environment, these are the channels to use.

Method 4: Consulting Salesforce Support (The Ultimate Fallback)

This is your most authoritative source. Salesforce Support can provide all realm information, including the current POD number. This is the best route to take when other methods are inconclusive or when you need an official record for compliance or audit purposes. To make the process efficient, open a support case and provide your Organization ID and Realm ID from the outset. 

Support will also be the primary source of information during a planned POD move.

Using the Salesforce Trust Site (For Monitoring, Not Discovery)

A common misconception is that the Salesforce Trust site can be used to find your POD (Point of Delivery) number. 

This is incorrect. 

The Trust site is where you go to check the status of a POD you already know. Once you’ve identified your POD number using one of the methods above, you can visit https://status.salesforce.com/products/B2C_Commerce_Cloud, find your POD in the list, and subscribe to notifications for maintenance and incidents.

The Official POD Lists

Salesforce maintains official knowledge base articles that list all PODs, their general locations (e.g., USA East – VA, Japan, …), their DR (Disaster Recovery) POD counterparts, and their outgoing IP addresses. These are invaluable reference documents.

You should use these lists in conjunction with the other discovery methods. For example, once the maintenance page URL indicates that you are on POD 126, you can consult the AMER list to find that its location is Virginia, its DR POD is 127, and its primary outbound IP address is 136.146.57.33.

Mastering Your Environment

Knowing how to find your POD number is more than a technical trick. It’s a sign of a developer who understands the platform on a deeper level. It empowers you to configure integrations with confidence, anticipate operational changes, and troubleshoot with precision.

The post The Ultimate SFCC Guide to Finding Your POD Number appeared first on The Rhino Inquisitor.

]]>
Image-ine: Salesforce B2C Commerce Cloud DIS for Developers https://www.rhino-inquisitor.com/image-ine-sfcc-dis-for-developers/ Mon, 14 Jul 2025 06:44:24 +0000 https://www.rhino-inquisitor.com/?p=13242 in the wild, wild west of e-commerce, images aren't just pretty pictures. They're your silent sales force, your conversion catalysts, and your SEO superheroes. Shoddy, slow-loading visuals? That's a one-way ticket to "bounce rate hell" and a brand image that screams "we tried." But fear not, intrepid developer! Salesforce B2C Commerce Cloud's Dynamic Image Service (DIS) is here to rescue your storefront from visual mediocrity and transform it into a high-octane, pixel-perfect masterpiece.

The post Image-ine: Salesforce B2C Commerce Cloud DIS for Developers appeared first on The Rhino Inquisitor.

]]>

In the wild, wild west of e-commerce, images aren’t just pretty pictures. They’re your silent “sales force” (☺), your conversion catalysts, and your SEO superheroes. Shoddy, slow-loading visuals? That’s a one-way ticket to “bounce rate hell” and a brand image that screams, “We tried.”

But fear not. Salesforce B2C Commerce Cloud’s Dynamic Image Service (DIS) is here to help. Keep in mind that this built-in tool has several tricks up its sleeve, but might not always be the best fit for your project, so keep reading! 

So, what exactly is DIS magic?

Imagine a world where you upload one glorious, high-resolution image, and then, poof!—it magically transforms into every size, shape, and format your storefront could ever dream of, all on the fly.
That, my friends, is the core enchantment of Salesforce B2C Commerce Cloud’s Dynamic Imaging Service (DIS). It’s designed to eliminate the nightmare of manually resizing, cropping, and uploading numerous image variants for every product view.

Instead of a digital assembly line of pre-processed images, DIS acts like a master chef. You provide it with the finest ingredients (your single, high-res source image), and when a customer’s browser requests a specific dish—say, a tiny thumbnail for a search result or a sprawling, detailed shot for a product page—DIS delivers it instantly. No waiting, no fuss – just the right-sized image, served hot and fresh. 

And you, the developer, are the culinary artist! DIS hands you a robust toolkit of transformation parameters, giving you pixel-level control. Want to resize? scaleWidth or scaleHeight are your pals. Need to snip out a specific detail? cropX, cropY, cropWidth, and cropHeight are your precision scissors (remember, you need all four for the magic to happen!). Fancy a different file type? format lets you switch between gif, jp2, jpg, jpeg, jxr, and png from a smorgasbord of source formats, including tif and tiff.

Ever wanted to add a “SALE!” image badge to an image without using Photoshop?  imageX, imageY, and imageURI are your go-to options for the overlay. Though honestly, why not just use CSS for this, right?

And for that perfect balance between crispness and speed, quality lets you fine-tune compression for JPGs (1-100, default 80) and PNGs. Even pesky transparent backgrounds can be tamed with bgcolor, and metadata stripped with strip.

Want to know precisely how all of these things work? Have a look at the official documentation.
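
For instance, a quick sketch of a crop-plus-resize transformation, using the same two-argument URLUtils.imageURL form shown later in this article (the image path and pixel values are arbitrary examples); note that all four crop parameters travel together:

var URLUtils = require('dw/web/URLUtils');

// Crop a 600x600 region starting at (100, 50), then scale the result down and re-encode it as JPG.
var croppedUrl = URLUtils.imageURL('/images/hero/banner.png', {
    cropX: 100,
    cropY: 50,
    cropWidth: 600,
    cropHeight: 600,
    scaleWidth: 300,
    format: 'jpg',
    quality: 80
});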

Why You Should Be Best Friends with DIS

A cartoon illustration of a developer shaking hands with a friendly, anthropomorphic cloud icon, as small, optimized images happily flow between them towards an e-commerce storefront. This symbolizes a strong, collaborative, and efficient relationship with the Dynamic Image Service.
Best Friends with DIS: Seamless Image Optimization

For developers navigating the Salesforce B2C Commerce Cloud universe, DIS isn’t just a nice-to-have; it’s a game-changer that simplifies your life and turbocharges your storefront.

Kiss Manual Image Management Goodbye: Seriously, who has time to create 10 different versions of the same product shot? With DIS, you upload one glorious, high-resolution image to Commerce Cloud, and DIS handles the rest, generating every size and format on demand. This means your creative and merchandising teams can focus on crafting stunning visuals, not on tedious, repetitive image grunt work. More creativity, less clicking!  

Speed Demon & Responsive Rockstar: In the e-commerce race, speed wins. DIS helps you cross the finish line first by serving up images that are just right for every scenario. No oversized behemoths slow down your product pages, and no pixelated thumbnails ruin your search results. This precision means faster page loads, which directly translates into happier customers, improved SEO, and ultimately, more conversions. Plus, DIS is your built-in responsive design partner, ensuring your storefront looks sharp and loads lightning-fast on any device, from desktops to smartphones. As I’ve discussed in my blog post, From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed, performance is paramount. 

Flexibility That’ll Make You Giddy: Ever had a designer suddenly decide to change the entire product grid layout? From four items at 150×150 pixels to three at 250×250? Without DIS, that’s a full-blown panic attack. With DIS? You tweak a few parameters in your templates, and bam!—new layout, perfectly sized images, no re-processing, no re-uploading, no re-assigning. Do you need a new promotional banner with a custom image size for a flash sale? Generate it instantly! (Ok…Ok, I might be a bit too optimistic here, some foresight and extra editor fields in Page Designer are needed for use-cases like this.)

This adaptability is pure gold. And here’s the cherry on top: by using the official Script API for URL generation, your image URLs are future-proofed. Salesforce can change its internal plumbing all it wants; your code remains rock-solid, reducing technical debt and maintenance headaches. 

				
					URLUtils.imageURL( '/<static image path>/image.png', { scaleWidth: 100, format: 'jpg' } );
				
			

Caching Like a Boss (and CDN’s Best Friend): DIS isn’t just dynamic; it’s smart. It caches (limited) transformations to deliver images at warp speed. If your Commerce Cloud instance is hooked up to a Content Delivery Network (newsflash: it is -> the eCDN), the CDN helps optimise caching as well (through TTL headers). 

When you update an image, there’s no need for manual cache invalidation thanks to a technique known as URL fingerprinting/asset fingerprinting. Instead of just replacing the old file, the platform creates a new URL for the updated image, often by adding a unique identifier (a “fingerprint”). Because the URL has changed, it forces browsers and the eCDN to download the new version as if it were a completely new file, bypassing the old cached version.

				
					/dw/image/v2/BCQR_PRD/on/demandware.static/-/Sites-master/default/dw515e574c/4.jpg
				
			

Do you notice that dw515e574c? It represents the unique “cache” ID managed by SFCC to ensure cached images are served. When the image updates, a new ID is generated so the customer always sees the latest version!

DIS Tips, Tricks, and How to Avoid Digital Disasters

To truly master DIS and avoid any “why isn’t this working?!” moments, keep these developer commandments in mind.

Embrace the Script API (Seriously, Do It!)

We can’t stress this enough: use the URLUtils and MediaFile Script API classes for generating your DIS URLs. 

It’s the official, validated, and future-proof way to do it. Here’s a little snippet to get you started:   

 
				
var ProductMgr = require('dw/catalog/ProductMgr');
var product = ProductMgr.getProduct('some-product-id'); // obtain your product object
var thumbnailImage = product ? product.getImage('thumbnail', 0) : null;

if (thumbnailImage) {
    var imageUrl = thumbnailImage.getImageURL({
        scaleWidth: 100,
        format: 'jpg',
        quality: 85
    });
    // The 'imageUrl' variable now holds the dynamically generated URL
}
				
			

Know Your Image Limits (and How to Work Around Them)

Even superheroes have weaknesses. DIS has a few, and knowing them is half the battle:

  • Source Image Quality: Always upload the largest, most beautiful, and highest-quality images you have. DIS is a master at shrinking and optimising, but it can’t create pixels out of thin air (it’s not an AI solution!).

  • Size Matters (A Lot): This is a big one. Images over 6MB in file size or larger than 3000×3000 pixels? DIS will politely decline to transform them and serve them up in their original, unoptimized glory. The first time you request an oversized image, you may encounter an error; however, subsequent requests typically proceed without issue. The takeaway? Keep your source images just under these limits (think 5.9MB or 2999×2999 pixels) to ensure DIS always works its magic.

    Note: One source states a 10MB limit in the documentation, but to be cautious, always follow the 6MB limit.

  • Transformation Timeout: DIS has a 29-second deadline. If a transformation is super complex (especially on animated GIFs, where every frame needs processing), it might time out, giving you a dreaded 408 error. If you hit this, simplify your transformations or pre-process those extra-fancy assets. 

  • Cropping’s Four Musketeers: If you’re cropping, remember cropX, cropY, cropWidth, and cropHeight are a package deal. All four must be present, or no cropping happens!   
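
To make that last rule concrete, here’s a minimal sketch of a crop transformation with all four parameters supplied together (productImage is assumed to be a dw.content.MediaFile, such as product.getImage('large', 0), and the crop values are purely illustrative):

// A minimal sketch: all four crop parameters must be passed together,
// otherwise DIS simply skips the crop and serves the uncropped image.
// `productImage` is assumed to be a dw.content.MediaFile.
var croppedUrl = productImage.getImageURL({
    cropX: 100,        // left offset in pixels of the source image
    cropY: 50,         // top offset in pixels
    cropWidth: 800,    // width of the cropped area
    cropHeight: 800,   // height of the cropped area
    format: 'jpg'
});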

Transform DIS PNG to JPG

When it comes to image formats, transforming PNG files to JPEG using the SFCC Dynamic Image Service can be a game-changer, especially when you don’t need those transparent backgrounds. This simple trick alone can significantly reduce file sizes, leading to faster page loads and a smoother user experience.

Here’s how you might implement this in a controller:

				
					'use strict';

var server = require('server');
var ProductMgr = require('dw/catalog/ProductMgr');

/**
 * @name Product-ImageExample
 * @function
 * @memberof Product
 * @description A controller endpoint that demonstrates the correct way to generate
 * a transformed image URL for a given product.
 */
server.get('ProductImageExample', function (req, res, next) {
    // 1. Retrieve the product object using the Product Manager.
    // The product ID should be passed as a query string parameter, e.g. ?pid=12345
    var product = ProductMgr.getProduct(req.querystring.pid);
    var imageUrl = ''; // Initialize a default empty URL.

    // 2. Check if the product and its image exist before proceeding.
    if (product) {
        // 3. Get the MediaFile object for the 'large' view type.
        var productImage = product.getImage('large', 0);

        if (productImage) {
            // 4. Generate the transformed URL using getImageURL() on the MediaFile object.
            // Here, we convert a PNG to a JPG and specify a white background.
            imageUrl = productImage.getImageURL({
                'format': 'jpg',
                'bgcolor': 'ffffff' // Use a 6-digit hex code for the color.
            }).toString(); // Convert the URL object to a string for the template.
        }
    }
    
    // 5. Render a template, passing the generated URL to be used in an <img> tag.
    res.render('product/productimage', {
        productImageURL: imageUrl
    });

    // It is standard practice to call next() at the end of a middleware function.
    next();
});

// Export the controller module.
module.exports = server.exports();
				
			

General Image Zen for Speed and Quality

DIS is powerful, but don’t forget the fundamentals of image optimisation:

  • Responsive Images (srcset & sizes): These attributes are your best friends for letting browsers pick the perfect image resolution for a user’s device and viewport. Less data, faster loads! 

  • Prevent Layout Jumps (CLS): Always specify the width and height attributes for your images. This reserves space, preventing annoying layout shifts that make your site feel janky and hurt your Core Web Vitals.   

  • Pre-Compress (Gently): While DIS handles quality, a little pre-compression on your source images (especially removing unnecessary metadata) can reduce file size by up to 30% without compromising visual quality. 

  • Leverage the CDN: DIS already plays nicely with Salesforce’s Content Delivery Network. This means your images are cached and delivered from servers closer to your global audience, making them appear almost instantly.  
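
Tying the first tip above back to DIS: you can build the srcset candidates with the same Script API. A minimal sketch, assuming image is a dw.content.MediaFile (for example product.getImage('large', 0)); the target widths are illustrative:

// Build a srcset value from DIS URLs for a few target widths.
// `image` is assumed to be a dw.content.MediaFile.
var widths = [320, 640, 1024];
var srcset = widths.map(function (width) {
    return image.getImageURL({ scaleWidth: width, format: 'jpg' }).toString() + ' ' + width + 'w';
}).join(', ');
// Pass `srcset` to your template and pair it with a matching `sizes` attribute.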

Troubleshooting: When Things Go Sideways

  • “My image isn’t transforming!” First suspect: file size or dimensions. Check those 6MB and 3000×3000 pixel limits. 

  • “408 Timeout Error!” If you’re seeing this, especially with animated GIFs or huge images that undergo numerous transformations, you’re approaching the 29-second limit. Simplify or pre-process.  

  • General Sluggishness: Remember, images are just one piece of the performance puzzle. If your storefront is still slow, look for other potential culprits, such as poorly optimised custom code, complex reports, or inefficient API calls. Regular code audits are your friend!  

When Not to Use It (Or When to Be Extra Careful)

A cartoon illustration depicting a massive traffic jam of oversized, unoptimized images attempting to enter a cloud icon, which appears overwhelmed and unable to process the volume. The images are backed up on a road leading to the cloud, symbolizing a system bottleneck or overload.
Image Overload: When Your Service Gets Jammed

While DIS is a superhero, even superheroes have their kryptonite. There are a few scenarios where DIS might not be your go-to, or where you need to tread with extra caution:

  • When Your Images Are Absolute Giants: Remember those 6MB file sizes and 3000×3000 pixel dimension limits? If your source images consistently blow past these thresholds, DIS won’t transform them. Instead, they’ll be served in their original, unoptimized glory. This results in slower load times and a subpar user experience, particularly on mobile devices. For truly massive, high-fidelity assets (think ultra-high-res hero banners or interactive 360-degree product views that require large file sizes), you may need to consider specialised external image services or alternative hosting solutions that can handle and optimise such large files, or simply serve the original if the performance impact is minimal.

  • For Super Complex, “Expensive” Transformations: DIS has a 29-second timeout for transformations. If you’re trying to perform multiple, intricate operations on a very large image, or especially on animated GIFs (where every single frame needs processing), you may encounter this wall and receive a 408 timeout error. If your use case demands such complex, real-time transformations, you might need to pre-process these assets offline or explore dedicated, more powerful image processing platforms designed for extreme computational demands.

  • When Images Aren’t Hosted on Commerce Cloud Digital: DIS only works its magic on images that are stored within your Commerce Cloud Digital environment. If your images are hosted externally (e.g., on a third-party DAM or a different CDN not integrated with Commerce Cloud’s asset management), DIS won’t be able to touch them. In such cases, you’d rely on the capabilities of your external hosting solution for image optimisation.

  • For Very Simple, Static Images with No Transformation Needs: If you have a tiny, static icon or a simple logo that never changes size, format, or quality, and you don’t anticipate any future dynamic needs, the overhead of routing it through DIS might be overkill. While DIS is designed for flexibility, for truly unchanging, small, and already optimised assets, direct static hosting might be marginally simpler, though the benefits of DIS’s caching and CDN integration often outweigh this. However, given the “future-proofing” aspect, it’s generally still a good idea to use DIS for consistency.

  • You Need More Modern Features: If you’ve been in the SFCC space for some time, you’ve likely noticed that little has changed regarding image resizing and format support over the years, although formats like WebP are managed by the eCDN. For those seeking the newest formats like AVIF, you’ll need to look elsewhere at this time.

    Note: The WebP transformation is handled by the eCDN, specifically through its image optimisation configuration feature known as Polish, rather than by DIS.

A cartoon illustration showing a fork in the road. One path leads to a cloud labeled "Dynamic Image Service (DIS)," and the other, larger path, leads to icons representing a "Third Party CDN/DAM" and "Digital Asset Management System." A developer character is pointing towards the CDN/DAM path, indicating a choice for image management solutions.
Deciding between Salesforce's native DIS and external CDN/DAM solutions often comes down to specific project needs and existing infrastructure.

Is it still useful for PWA Kit projects? (Spoiler: YES, and here's why!)

Absolutely, unequivocally, 100% YES! DIS isn’t just relevant for PWA Kit projects; it’s arguably more crucial. Modern headless storefronts, like those built with PWA Kit, thrive on speed, flexibility, and that buttery-smooth, app-like user experience. 

Dynamic image transformation is practically a prerequisite for achieving that.

Page Designer's Best Friend & Product Image Powerhouse

DIS integrates beautifully with Page Designer within PWA Kit. Page Designer, for the uninitiated, is Business Manager’s visual editor, which allows marketers to build dynamic, responsive pages without writing a single line of code (well, at least once all the components are developed 😇). 

Where do your product images live? In Commerce Cloud, of course! Which means DIS is the star player for serving them up. Page Designer components can then tap into DIS to display product images, content assets, or any other visual element, ensuring they’re perfectly optimised for whatever device your customer is using.   

The DynamicImage Component: Your PWA Kit Sidekick

PWA Kit even has a dedicated DynamicImage component that makes integrating with DIS a breeze. This component is designed to handle image transformations by mapping a set of responsive widths to the correct sizes and srcset attributes, simplifying responsive image strategies directly within your React components.

				
					<DynamicImage
    src={`${heroImage.disBaseLink || heroImage.link}[?sw={width}&q=60]`}
    widths={{
        base: '100vw',
        lg: heroImageMaxWidth
    }}
    imageProps={{
        alt: heroImage.alt,
        loading: loadingStrategy
    }}
/>
				
			

The post Image-ine: Salesforce B2C Commerce Cloud DIS for Developers appeared first on The Rhino Inquisitor.

]]>
AI Won’t Steal Your SFCC Job, But a Developer Using AI Will: The Rhino Inquisitor’s Survival Guide https://www.rhino-inquisitor.com/ai-wont-steal-your-sfcc-job-but-a-developer-using-ai-will/ Mon, 30 Jun 2025 17:46:53 +0000 https://www.rhino-inquisitor.com/?p=13098 Let’s cut to the chase. The whispers in every virtual stand-up, the subtext of every tech keynote, the existential dread creeping into your late-night coding sessions—it all boils down to one question: Is Artificial Intelligence coming for your job? As a Salesforce Commerce Cloud developer, you’re standing at the intersection of a specialized, high-stakes platform and the most disruptive technological wave of our generation. The anxiety is palpable, and it’s not unfounded.

The post AI Won’t Steal Your SFCC Job, But a Developer Using AI Will: The Rhino Inquisitor’s Survival Guide appeared first on The Rhino Inquisitor.

]]>

The Elephant (or Rhino) in the Room: Staring Down the AI Hype

Let’s cut to the chase. The whispers in every virtual stand-up, the subtext of every tech keynote, the existential dread creeping into your late-night coding sessions—it all boils down to one question: Is Artificial Intelligence coming for your job? As a Salesforce Commerce Cloud developer, you’re standing at the intersection of a specialised, high-stakes platform and the most disruptive technological wave of our generation. 

The anxiety is palpable, and it’s not unfounded.

But the job of the Rhino Inquisitor is to charge head-first through the fog of fear and hype to uncover the hard, practical truth. So here it is: No, AI is not going to make you obsolete. However, it will, unequivocally, and without mercy, render developers who refuse to adapt obsolete. The threat isn’t the algorithm. It’s atrophy.

This isn’t some far-off future. The shift is already here. The 2024 DORA Report reveals that 76% of developers are already using AI-powered tools in their daily work. A GitHub survey from the same year found that a staggering 97% of developers have used generative AI platforms. 

This is no longer an experimental niche… It’s a rapidly adopted standard.

Businesses are (or will go) all-in, with 78% of organisations reporting AI usage in 2024, a massive jump from 55% the previous year. The data is clear: AI is being integrated into the software development lifecycle at a breathtaking pace, promising boosts in productivity, code quality, and even developer focus.

However, the most dangerous misconception is that simply using AI to write code faster automatically translates to greater value. This brings us to a critical, non-obvious threat that developers must understand: the “Vacuum Hypothesis.” 
Introduced in the DORA Report, this concept tells us that the time developers save by using AI is often immediately absorbed by lower-value activities, such as endless meetings, bureaucratic red tape, and context-switching between trivial tasks.

Consider this scenario: you use GitHub Copilot to generate a controller with helpers and its test class in 30 minutes, a task that previously took you 90. You’ve just saved an hour. But what happens to that hour? In many organisations, it evaporates into a vacuum of inefficiency. It’s consumed by an extra status update meeting, a flurry of low-priority Slack messages, or simply waiting for a manual, bottlenecked deployment process to inch forward. The micro-level productivity gain is completely nullified by macro-level organisational drag.

This reveals a more profound truth. The most successful developers in this new era won’t just be the ones who master AI tools. They will be the ones who leverage the productivity gains from those tools to focus on high-value work that AI cannot do: architecting complex, scalable systems, mentoring junior developers, collaborating with business stakeholders to solve the right problems, and championing the process improvements needed to ensure that saved time is reinvested, not wasted. 

The challenge is as much about changing your organisation’s culture as it is about changing your own code editor.

Déjà Vu All Over Again: A Brief History of Developer "Extinction Events"

The fear that a new technology will render developers obsolete is a story as old as the profession itself (and not just the developer profession). Every major technological leap has been met with predictions of our imminent demise. Yet, each time, the opposite has happened. These “extinction events” were actually elevation events. 

They were moments of abstraction that, rather than replacing developers, freed them from tedious, low-level tasks to tackle problems of ever-increasing complexity and scale. AI is simply the latest, and most powerful, chapter in this long-running story.

The Compiler Revolution (1950s-1960s): From Machine Whisperer to System Architect

In the pioneering days of computing, programming was a tedious and painstaking process. Developers wrote instructions directly in binary or low-level assembly code, a process that required an intimate, almost mystical, understanding of the machine’s hardware. Then came the compiler. Tools like FORTRAN and COBOL introduced high-level languages that allowed programmers to write in a more human-readable syntax. The compiler would then automate the translation of this code into the ones and zeros the machine understood.

The “threat” was obvious: what would happen to the programmers who had spent years mastering the intricacies of machine code? Would this automation make them redundant?

The reality was transformative. The compiler abstracted away the hardware, freeing developers from the tyranny of the machine. This single innovation gave birth to the entire discipline of software engineering. Instead of focusing on managing memory registers, developers could now focus on designing algorithms, data structures, and complex application logic. The scope of what was possible exploded. 

The programmer evolved from a machine whisperer into a system architect.

The IDE Takeover (1990s-2000s): From Code Typist to Supercharged Problem-Solver

For decades, a developer’s toolkit was a fragmented collection of disparate programs: a text editor, a separate compiler, a command-line debugger, and build scripts. Then came the Integrated Development Environment (IDE), which bundled all these tools into a single, cohesive application. 

The “threat” was one of deskilling. With features like syntax highlighting, intelligent code completion, one-click debugging, and integrated version control, the IDE automated dozens of small, manual tasks that were once the hallmark of a seasoned developer’s workflow. Would this “dumbing down” of the process reduce the value of experienced programmers?  

The reality was a massive leap in productivity. Studies have shown that IDEs can boost developer productivity by up to 30% and significantly reduce debugging time. By abstracting the workflow, the IDE allowed developers to navigate and manage enormous, complex codebases with unprecedented ease. This efficiency gain was essential for building the large-scale, distributed web applications that came to define the internet age. The developer was no longer just a code typist; they were a supercharged problem-solver, wielding a powerful, integrated toolset to build more, faster.  

A timeline showing the evolution of the software developer role in four stages. It starts in the 1950s with a developer and a mainframe, progresses to the 1990s with an IDE on a desktop, then to the 2000s with Agile team collaboration, and ends today with a rhino developer augmented by AI robot assistants.
And it's not just developers who experience this - every industry has its own story.

The Agile & DevOps Movement (2000s-2010s): From Siloed Coder to Value Stream Owner

The final major shift came not from a tool, but from a philosophy. The Waterfall model, with its rigid, sequential phases, was too slow and inflexible for the fast-paced world of web software. The Agile and DevOps movements proposed a new way of working, emphasising iterative development, cross-functional collaboration, and the automation of the entire release pipeline through practices like Continuous Integration and Continuous Delivery (CI/CD).  

The “threat” was a blurring of roles. If deployment and operations were automated, what was the primary function of the developer?

The reality was an expansion of responsibility. The developer’s role grew to encompass the entire software lifecycle, from ideation and coding to deployment, monitoring, and maintenance. They were no longer just writing code in a silo; they were owners of a value stream, responsible for delivering tangible business outcomes quickly and reliably. 

This history reveals an undeniable pattern: every wave of automation and abstraction has elevated the role of the developer, pushing them to operate at a higher level of strategic thinking. 

However, there is one crucial difference this time around. 

The pace of change is accelerating at an exponential rate. The transition from machine code to high-level languages took the better part of a decade. The widespread adoption of generative AI coding assistants has occurred in less than three years. 

This compressed timeline means that the ability to learn and adapt is no longer just a valuable trait; it is the single most critical survival skill. A “wait and see” approach is a guaranteed strategy for obsolescence. The skill gap between those who adopt these tools and those who do not will widen more rapidly than in any previous technological shift. 

The AI-Augmented Rhino: Your SFCC Developer Arsenal in 2025

Theory and history are comforting, but survival requires a practical arsenal. For the SFCC developer, this means moving beyond abstract notions of “using AI” and mastering a specific set of tools and techniques. This is how you transform from a potential victim of disruption into an AI-augmented rhino, capable of charging through complexity and delivering value at an unprecedented speed.

Your AI Pair Programmer: Code Generation and Assistance

The most immediate and tangible application of AI in our profession as developers is in the act of writing code.

These tools are not just fancy autocompletes… They are context-aware partners that can drastically reduce the time spent on repetitive, boilerplate tasks.

A cartoon rhino developer at a desk looks thoughtfully at a screen with code. Next to the keyboard, an AI assistant designed like a yellow rubber duck projects a holographic code snippet, illustrating the concept of AI pair programming.
The classic 'rubber duck debugging' method gets a serious upgrade. Explain your problem to an AI that can actually talk back with a solution. (Well, not always - always verify and don’t trust blindly)

GitHub Copilot for SFCC

GitHub Copilot is the de facto (ok … this is debatable – but replace GitHub Copilot with your favourite) standard for AI-assisted coding, and its capabilities extend deep into the ecosystem.

Because it was trained on countless public GitHub repositories, it has a surprisingly robust understanding of React.js, OCAPI and SCAPI structures.

ChatGPT and LLMs for Strategic Development

Large Language Models (LLMs) like ChatGPT, Claude, and Gemini are your strategic partners. Using them effectively requires a new core competency: Prompt Engineering.

This is the art and science of crafting instructions that guide the AI to produce the desired output. Key principles include:

  • Role Prompting: Begin your prompt by assigning a persona. “Act as a senior SFCC technical architect with 15 years of experience in high-volume retail.” This frames the AI’s knowledge and response style.

  • Providing Context: Give the AI all the relevant background. Paste in existing code, business requirements, or error messages.

  • Using Delimiters: Clearly separate your instructions from the data you provide using markers like triple backticks (```) or XML tags.

Armed with these techniques, you can use LLMs for high-level tasks that go far beyond simple code generation:

  • Architectural Brainstorming: “I need to build a custom ‘Quick Order’ feature on an SFCC PWA Kit store. Provide me with three different technical approaches for comparison: the use of a custom SCAPI endpoint, a server-side SFCC controller with a traditional form post, and a standard SCAPI endpoint. Analyse the pros and cons of each regarding performance, scalability, and development effort.”

  • Legacy Code Archaeology: “Here is a legacy SFCC pipelet script from a SiteGenesis implementation. Explain what it does, identify its inputs and outputs, and highlight potential points of failure or performance bottlenecks.” 

  • Documentation on Demand: “Generate a JSDoc comment block for the following JavaScript method, explaining its parameters, return value, and purpose.”

  • Test Plan Generation: “Create a comprehensive test plan for an e-commerce checkout flow. Include test cases for different payment methods (credit card, PayPal), shipping options, guest vs. registered user checkout, and handling of invalid coupon codes. Here is a description of all the steps in our checkout process: …” 

Your AI QA Engineer: Smarter Testing & Debugging

Debugging and testing are two of the most time-consuming aspects of development, especially within the complex, interconnected systems of SFCC. 

AI is poised to revolutionise this space, acting as a tireless QA engineer that can catch issues before they ever reach a human reviewer.  Tools like Qodo AI (formerly CodiumAI) can analyse your code and automatically generate meaningful unit tests, covering edge cases you might have missed. For debugging, Workik offers context-aware analysis, allowing you to provide error messages and relevant code snippets to receive intelligent, plain-English explanations of the root cause. More advanced tools even allow you to have a conversation with your debugger, asking questions like, “Why is this orderTotal variable null at this point in the execution?”  

For an SFCC developer, this means a future where the soul-crushing task of writing boilerplate test data setup is automated. 

It means pasting a cryptic NullPointerException stack trace into an AI tool and getting back a precise explanation and a suggested fix. 

It means integrating security scanners like SnykCode directly into your IDE to flag vulnerabilities in your custom code in real-time, long before a pull request is ever created. 

This isn’t about replacing QA. It’s about augmenting it, freeing up human testers to focus on complex user experience issues and business logic validation.  

The Evolved Developer: More Than a Coder, a Creator

The rise of AI marks a fundamental shift in the value proposition of a software developer. When the act of writing code—the “how”—is increasingly automated, the most valuable professionals will be those who have mastered the “why.” Your worth will be measured not by your typing speed, but by the quality of your thinking.

A friendly cartoon rhino developer acts as a strategic leader, presenting an architecture plan on a large screen to a diverse group of business stakeholders. In the background, AI robot assistants implement the plan by coding at their desks.
The AI-augmented developer: Spending less time on the keyboard and more time translating business strategy into a technical vision that AI can execute.

The Architect's Mindset: Your Most Valuable Asset

As AI dramatically lowers the barrier to implementation, the relative importance of high-quality system design, clear interface definitions, and robust architectural boundaries skyrockets. A poorly architected system, even if coded flawlessly and instantly by an AI, is still a poorly architected system. It will be brittle, difficult to maintain, and unable to scale. 

The SFCC developer of the future adds value long before the first line of AI-generated code is produced. Your expertise is no longer demonstrated by your ability to write a perfect for loop in ISML script. It is shown in your ability to analyse a business requirement and make critical architectural decisions. 

Can this new requirement be met with out-of-the-box features, or should we write a custom controller that builds on the existing SFRA framework?

Should this feature use custom objects or custom caches? 

Is it a candidate for a third-party API integration? Or can the business goal be met more effectively by leveraging a native feature?

This is the architect’s mindset. It’s about understanding the entire ecosystem—the platform’s capabilities, the available APIs, the business goals, and the long-term maintenance implications—and charting the most effective course. This strategic thinking is a uniquely human skill that AI, in its current form, cannot replicate.

"Soft" Skills are Now Hard, Non-Negotiable Technical Skills

For too long, the industry has dismissed crucial human-centric abilities as “soft skills,” implying they are secondary to “hard” technical prowess. In the age of AI, this distinction is becoming dangerously obsolete. These skills are now core, non-negotiable competencies for any effective technical professional. 

The logic is straightforward. AI can generate code, but that code can be buggy, inefficient, or insecure. This phenomenon has been dubbed “implementation amnesia,” where developers become dependent on AI suggestions without building a deep mental model of the systems they create. Therefore, a developer needs Critical Thinking to rigorously evaluate, question, and refine the AI’s output. This is not a soft skill – it is a fundamental technical requirement for ensuring quality.   

Similarly, AI requires clear, unambiguous, and context-rich instructions to generate valuable results. Therefore, a developer needs exceptional Communication and Prompting skills to translate complex business requirements into instructions the AI can understand and execute. This is a technical skill of the highest order.  

Ultimately, AI addresses technical issues, but businesses tackle human-centric problems. AI cannot understand a user’s frustration with a clunky checkout process or empathise with a merchant’s need to hit a quarterly sales target. Therefore, a developer needs Collaboration and Empathy to work with stakeholders, understand their actual needs, and define the correct problems for the AI to solve in the first place.  

These are not optional niceties… they are business-critical technical skills that determine whether a project succeeds or fails.

Conclusion: Be the Rhino, Not the Dodo

The history of software development is a history of abstraction. Each new layer, from the compiler to the IDE to the cloud, has eliminated a class of manual labour and, in doing so, has empowered developers to build things that were previously unimaginable. Generative AI is the most profound abstraction layer we have ever witnessed. It is abstracting the very act of writing code itself.

This is not a cause for fear! It is a cause for action. 

It does not make you obsolete; it gives you unprecedented leverage. The developers who face extinction are those who cling to the past, defining their value by the tasks that AI can now do better and faster. They are the dodos of this new era, unable to adapt to a changing environment.

The developers who thrive will be the ones who are like rhinos. They will see AI not as a competitor, but as a powerful partner that frees them from the mundane and empowers them to focus on the work that truly matters: creativity, strategic thinking, complex problem-solving, and human collaboration.

The future of the SFCC developer is not that of a simple coder, but of a technical leader, a system architect, and a strategic problem solver. The path forward is clear, but the window of opportunity to adapt is closing faster than ever before.

Please don’t wait. The time for passive observation is over.

  • Get your hands dirty, now. If you don’t have a GitHub Copilot license, buy one this week. The $10 per month is the single best investment you can make in your career. (For your own projects, that is; using it on customer/company code is a bit trickier legally.)

  • Experiment relentlessly with prompts. Take a piece of your own code and ask Copilot to refactor it, explain it, or find bugs in it. Learn the language of AI.

The future isn’t something that happens to you; it’s something that you create. It’s something you build. Stop worrying about being replaced. Pick up the tools, sharpen your horn, and become the AI-augmented rhino that leads the charge.

A cartoon rhino developer, dressed as a conductor, leads an orchestra of small robots. The robots sit in sections and use laptops and data interfaces instead of musical instruments, symbolizing a developer orchestrating various AI tools.
Your new podium awaits. The AI-augmented developer orchestrates a powerful ensemble of tools, where strategy is the sheet music and business impact is the masterpiece.

The post AI Won’t Steal Your SFCC Job, But a Developer Using AI Will: The Rhino Inquisitor’s Survival Guide appeared first on The Rhino Inquisitor.

]]>
From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed https://www.rhino-inquisitor.com/lag-to-riches-a-pwa-kit-developers-guide/ Mon, 23 Jun 2025 17:00:05 +0000 https://www.rhino-inquisitor.com/?p=12990 Let's be honest: a slow e-commerce site is a silent killer of sales. In the world of B2C Commerce, every millisecond is money. As a PWA Kit developer, you're on the front lines of a battle where the prize is customer loyalty and the cost of defeat is a lost shopping cart. Today's shoppers have zero patience for lag. They expect buttery-smooth, app-like experiences, and they'll bounce if you don't deliver.

The post From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed appeared first on The Rhino Inquisitor.

]]>

Truth be told: a slow e-commerce site is a silent killer of sales. In the world of B2C Commerce, every millisecond is money. As a PWA Kit developer, you’re on the front lines of a battle where the prize is customer loyalty and the cost of defeat is a lost shopping cart. Today’s shoppers have zero patience for lag. They expect buttery-smooth, app-like experiences, and they’ll bounce if you don’t deliver.

The numbers don’t lie. A one-second delay can reduce conversions by as much as 7%. But flip that around: a tiny 0.1-second improvement can boost conversion rates by a whopping 8% and keep shoppers from abandoning their carts. When you consider that more than half of mobile users will leave a site that takes over three seconds to load, the mission is crystal clear: speed is everything.

So, how do we win this battle? We need the proper intel and the right weapons. That’s where Google’s Core Web Vitals (CWV) and the Chrome User Experience Report (CrUX) come in. These aren’t just abstract numbers; they’re a direct line into how real people experience your storefront. 

This post is your new playbook. We’re going to break down why every millisecond matters and provide you with an actionable roadmap for taking the Composable Storefront to the next level.

Step 1: Know Your Numbers - Getting Real with CrUX and Core Web Vitals

Before you can optimise anything, you need to understand what you’re measuring. Let’s demystify the data that both your users and Google care about, starting with the difference between what happens in a controlled lab and what happens in the messy real world.

Meet CrUX: Your Real-World Report Card

The Chrome User Experience Report (CrUX) is a massive public dataset from Google, packed with real-world metrics from actual Chrome users. It’s the official source for Google’s Web Vitals program and the ultimate ground truth for how your site performs for your visitors.

This data comes from Chrome users who have opted in to syncing their browsing history and have usage statistic reporting enabled, without a Sync passphrase. For your site to appear in the public dataset, it must be discoverable and have sufficient traffic to ensure that all data is anonymous and statistically significant.

Here are two things you absolutely must know about CrUX:

  1. It’s a 28-Day Rolling Average: CrUX data isn’t live. It’s a trailing 28-day snapshot of user experiences. This means when you push a brilliant performance fix, you won’t see its full impact on your CrUX scores for up to a month. It’s a marathon, not a sprint.
  2. It’s All About the 75th Percentile: To evaluate your site’s performance, CrUX focuses on the 75th percentile. This means that to achieve a “Good” score for a metric, at least 75% of your (hard navigation) pageviews must have an experience that meets the “Good” mark. This focuses on the majority experience while ignoring the wild outliers on terrible connections.

You can also slice and dice CrUX data by dimensions such as device type, providing a powerful lens into your specific audience’s experience.

Lab Coats vs. The Real World: Why Field Data is King

This is one of the most common points of confusion, so let’s clarify it.

Field Data (The “What”) is what we’ve been talking about—data from real users on their own devices and networks. It’s also known as Real User Monitoring (RUM), and CrUX is the largest public source of it. It captures the beautiful chaos of the real world: slow phones, spotty Wi-Fi, and everything in between. It tells you what is happening.

Lab Data (The “Why”) is what you get from a controlled test, like running Google Lighthouse. It simulates a specific device and network to provide you with a performance report. Lab data is your diagnostic tool. It helps you understand why you’re seeing the numbers in your field data.

Here’s the million-dollar takeaway: Google uses field data from CrUX for its page experience ranking signal, NOT your lab-based Lighthouse score.

Google wants to reward sites that are genuinely fast for real people, not just in a perfect lab setting. Your goal isn’t to achieve a 100% score on Lighthouse; your goal is to ensure at least 75% of your real users pass the Core Web Vitals thresholds.

Lighthouse is the tool that helps you get there.

The Big Three: LCP, INP, and CLS Explained

"A three-panel cartoon showing a website mascot experiencing performance issues. First, labeled 'Slow LCP', the mascot strains to lift a heavy image. Second, labeled 'High INP', the mascot is frozen in a block of ice, unresponsive to a user's click. Third, labeled 'High CLS', the mascot is knocked over by a falling ad block that displaces a button.
A visual guide to Core Web Vital problems: How poor LCP, INP, and CLS create a frustrating user experience.

Core Web Vitals are the metrics that matter most. They measure three key aspects of user experience: loading, interactivity, and visual stability.

Largest Contentful Paint (LCP): Are We There Yet?

  • What it is: LCP measures how long it takes for the largest image or block of text to appear on the screen. It’s an excellent proxy for when a user feels like the page’s main content has loaded.
  • The Goal: “Good” is 2.5 seconds or less. “Poor” is over 4 seconds.
  • Why it Matters for E-commerce: A slow LCP means your customer is staring at a loading screen instead of your product. This initial frustration is a one-way ticket to a high bounce rate.

Interaction to Next Paint (INP): Did That Click Do Anything?

  • What it is: INP measures how responsive your page is to user input. It tracks the delay for all clicks, taps, and key presses during a visit and reports a single value representing the system’s overall responsiveness. A high INP is what users refer to as “janky” or “unresponsive.” It replaced First Input Delay (FID) in March 2024 because it’s a much better measure of the entire user journey.
  • The Goal: “Good” is 200 milliseconds or less. “Poor” is over 500ms.
  • Why it Matters for E-commerce: High INP kills conversions. When a user clicks “Add to Cart” and nothing happens instantly, they lose trust and get frustrated. This leads to “rage clicks” and, ultimately, abandoned carts.

Cumulative Layout Shift (CLS): Stop Moving!

  • What it is: CLS measures how much your page’s content unexpectedly jumps around as it loads. It calculates a score based on how much things move and how far they move without the user doing anything.
  • The Goal: “Good” is a score of 0.1 or less. “Poor” is over 0.25.
  • Why it Matters for E-commerce: Have you ever tried to click a button, only to have an ad load and push it down, making you click the wrong thing? That’s high CLS. It’s infuriating for users and makes your site feel broken and untrustworthy.

Step 2: Understand Your Architecture - The PWA Kit's Double-Edged Sword

The Salesforce PWA Kit is engineered for speed, but its modern architecture creates two distinct performance battlegrounds. To win, you need to understand how to fight on both fronts.

The First Impression: Server-Side Rendering (SSR) to the Rescue

A vibrant, two-panel cartoon comparing web rendering methods. The top panel, 'Client-Side Rendering,' shows a stressed user buried in parts from a 'JavaScript Bundle' box. The bottom panel, 'Server-Side Rendering,' shows a happy user cheering as a heroic robot serves them a complete, glowing webpage on a platter.
From frustrating assembly to instant delight: The power of Server-Side Rendering.

When a user first lands on your site, the PWA Kit uses Server-Side Rendering (SSR) to make a great first impression. Here’s the play-by-play:

  1. A user requests a page.
  2. The request hits an Express.js app server running in the Salesforce Managed Runtime (MRT).
  3. On the server, your React components are rendered into a complete HTML document, with all necessary data fetched directly from the Commerce APIs.
  4. This fully baked HTML page is sent directly to the user’s browser.

The huge win here is for your Largest Contentful Paint (LCP). The browser gets a meaningful page instantly, instead of a blank screen and a giant JavaScript file it has to figure out.

The Managed Runtime then takes this to the next level. It has a built-in Content Delivery Network (CDN) that can cache these server-rendered pages. 
If another user requests the same page, the CDN can serve the cached version instantly, completely bypassing the server. A cached SSR response is the fastest you can get, leading to stellar LCP and Time to First Byte (TTFB) scores.

The Main Event: Hydration and Client-Side Interactivity

Once that initial HTML page loads, the magic of hydration happens. The client-side JavaScript bundle downloads, runs, and brings the static HTML to life by attaching all the event handlers and state.

From this moment on, your storefront is a Single-Page Application (SPA). All navigation and UI changes are handled by Client-Side Rendering (CSR). When a user clicks a link, JavaScript takes over, fetches new data, and renders only the parts of the page that need to change, all without a full page reload.

This is the “double-edged sword.” CSR provides that fluid, app-like feel, but it’s also where you’ll find the bottlenecks that hurt your Interaction to Next Paint (INP).

This creates a clear divide: LCP optimisation is a server-side game of efficient rendering and aggressive caching. INP optimisation is a client-side battle against bloated, inefficient JavaScript. 

You can have a fantastic LCP score but still have a terrible user experience due to high INP from clunky client-side code. PWA Kit projects are powerful React apps, and they can get JavaScript-heavy if you’re not careful. And the built-in libraries, such as Chakra UI, don’t make it easy for you to win this battle.

You need to wear two hats: a backend/DevOps hat for the initial load, and a frontend performance specialist hat for everything after.

The Usual Suspects: Common PWA Kit Performance Bottlenecks

Every PWA Kit developer will eventually face these common performance villains. Here’s your wanted list:

  • Bloated JavaScript Bundles: The Retail React App template is excellent, but if you don’t manage it properly, your JS bundle can become huge. Every new feature adds weight, slowing down hydration and hurting INP.
  • Clumsy Data Fetching: Whether you’re using the old getProps or the new withReactQuery, you can still make mistakes. Fetching data sequentially instead of in parallel, grabbing significantly more data than needed, or re-fetching data on the client that the server has already provided are all common ways to slow down TTFB and LCP.
  • Unruly Third-Party Scripts: These are public enemy #1. Scripts for analytics, ads, A/B testing, and support chats can be performance nightmares. They block the main thread, tank your INP, and can even mess with your service worker caching.
  • Poorly Built Custom Components: A single custom React component that isn’t optimised for performance can significantly impact your INP. This typically occurs through expensive calculations on every render or by triggering a chain reaction of unnecessary re-renders in its children.
  • Messed-Up Caching: The MRT’s CDN is powerful, but it’s not magic. If you don’t set your Cache-Control headers correctly, fail to filter out unnecessary query parameters, or misconfigure your API proxies, you’ll experience a poor cache-hit ratio, and all the benefits of Server-Side Rendering (SSR) will be lost.
A colorful cartoon of a chaotic factory illustrating four web performance bottlenecks. The bottlenecks shown are: a giant truck labeled 'Large Bundle Size' blocking the entrance, many small pipes labeled 'Network Waterfalls' slowly filling a tank, a complex machine for a simple task labeled 'Re-render Storms', and workers slipping on puddles labeled 'Memory Leaks'.
Inside a struggling SPA: A visual guide to common performance bottlenecks.

Step 3: The Performance Playbook - Your Guide to a Faster Storefront

Now that you know the what and the why, let’s get to the how. Here are the specific, actionable plays you can run to build a high-performance storefront.

Master Your Data Fetching

How you fetch data is critical for a fast LCP and a snappy experience.

  • Use withReactQuery for New Projects: If you’re on PWA Kit v3+, withReactQuery is your best friend. It utilises the popular React Query library to make fetching data on both the server and client a breeze. It smartly avoids re-fetching data on the client that the server has already retrieved, which means cleaner code and improved performance.
  • Optimise getProps for Legacy Projects: Stuck on an older project? No problem. Optimise your getProps calls:
    • Be a Minimalist: Return only the exact data your component needs for its initial render. Don’t send the whole API response object. This keeps your HTML payload small.
    • Go Parallel: If a page needs data from multiple APIs, use Promise.all to fire off those requests at the same time. This is way faster than waiting for them one by one.
    • Handle Errors with Finesse: For critical errors (such as a product not found), throw an HTTPError to display a proper error page. For non-critical stuff, pass an error flag in props so the component can handle it without crashing.
  • Fetch Non-Essential Data on the Client: Anything that’s not needed for the initial, above-the-fold view (such as reviews or related products) should be fetched on the client side within a useEffect hook. This enables your initial page to load faster, improving TTFB and LCP.
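
To make the “Go Parallel” and “Be a Minimalist” tips concrete, here’s a hedged sketch of a page-level getProps. The exact arguments getProps receives and the shape of the api client depend on your PWA Kit version; the component, route parameter, and API calls below are illustrative, not the Retail React App’s implementation:

import React from 'react'

// A minimal sketch, not the Retail React App implementation.
const ProductDetail = ({product, category}) => (
    <div>
        <h1>{product && product.name}</h1>
        <p>{category && category.name}</p>
    </div>
)

// Fetch everything the page needs for its first server-side render.
ProductDetail.getProps = async ({res, params, api}) => {
    // Fire independent requests at the same time instead of one by one.
    const [product, category] = await Promise.all([
        api.shopperProducts.getProduct({parameters: {id: params.productId}}),
        api.shopperProducts.getCategory({parameters: {id: 'root', levels: 1}})
    ])

    // Optionally set a per-page CDN cache time (see the caching section below).
    if (res) {
        res.set('Cache-Control', 's-maxage=900, stale-while-revalidate=3600')
    }

    // Return only the fields the component actually needs for its initial render.
    return {product, category}
}

export default ProductDetail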

Whip Your JavaScript and Components into Shape

Your client-side React code is the most significant factor for INP. Time to optimise.

  • Split Your Code: The PWA Kit is already set up for code splitting with Webpack, so use it! Load your page-level components dynamically with the loadable utility. This means the code for the product detail page is only downloaded when a user visits it, thereby shrinking your initial bundle size.
  • Lazy Load Below-the-Fold: For heavy components that are “below the fold” or in modals, use lazy loading.
  • Stop Wasting Renders: Unnecessary re-renders are a top cause of poor INP. Use React’s memoisation hooks like a pro:
    • React.memo: Wrap components in React.memo to stop them from re-rendering if their props haven’t changed. Perfect for simple, presentational components.
    • useCallback: When you pass functions as props to memoised children, wrap them in useCallback. This maintains the function’s reference stability, preventing the child from re-rendering unnecessarily.
    • useMemo: Use useMemo for expensive calculations. This caches the result so it’s not recalculated on every single render.
  • Be Smart with State: The Context API is great, but be careful. Any update to a context re-renders all components that use it. For complex states, break your contexts into smaller, logical pieces (like a UserContext and a CartContext) to keep re-renders contained.
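
Here’s a minimal sketch of the memoisation tips above working together; the component and prop names are illustrative, not taken from the Retail React App:

import React, {useCallback, useState} from 'react'

// A memoised child: it only re-renders when its own props change.
const FilterOption = React.memo(({label, onSelect}) => (
    <button onClick={() => onSelect(label)}>{label}</button>
))

const FilterList = ({options}) => {
    const [selected, setSelected] = useState(null)

    // useCallback keeps the function reference stable across renders,
    // so the memoised FilterOption children don't re-render needlessly.
    const handleSelect = useCallback((label) => setSelected(label), [])

    return (
        <div>
            {options.map((label) => (
                <FilterOption key={label} label={label} onSelect={handleSelect} />
            ))}
            {selected && <p>Filtering by: {selected}</p>}
        </div>
    )
}

export default FilterList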

Become a Caching Ninja with Managed Runtime

Getting your CDN cache hit ratio as high as possible is the single most effective way to boost LCP for most of your users.

  • Set Granular Cache-Control Headers:
    • Per-Page: Inside a page’s getProps function, you can set a custom cache time. A static “About Us” page can be cached for days (res.set('Cache-Control', 'public, max-age=86400')), while a product page might be cached for 15-30 minutes.
    • Use stale-while-revalidate: This header is pure magic. Cache-Control: s-maxage=600, stale-while-revalidate=3600 tells the CDN to serve a cached version for 10 minutes. If a request comes in after that, it serves the stale content instantly (so the user gets a fast response) and then fetches a fresh version in the background. It’s the perfect balance of speed and freshness.
  • Build Cache-Friendly Components: To be cached, your server-rendered HTML needs to be generic for all users. Any personalised content (like the user’s name or cart count) must only be rendered on the client. A simple trick is to wrap it in a check:

    {typeof window !== 'undefined' && <MyPersonalizedComponent />}

    This ensures it only renders in the browser.

  • Filter Useless Query Parameters: Marketing URLs often contain “unnecessary” parameters, such as gclid and utm_tags, which make every URL unique and prevent your cache from being effective. Edit the processRequest function in app/request-processor.js to strip these parameters before checking the cache. This allows thousands of different URLs to access the same cached page.
  • Cache Your APIs: By default, proxied requests aren’t cached by the CDN, which lets you use proxies in your code without worrying about accidentally caching responses. If you do want a proxied request cached by the CDN, simply change the path prefix from proxy to caching.
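
As a sketch of the query-parameter filtering idea, assuming processRequest receives and returns an object with path and querystring (double-check the request-processor contract for your PWA Kit version before relying on this shape):

// A minimal sketch of app/request-processor.js.
const UNWANTED_PARAMS = ['gclid', 'utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content']

export const processRequest = ({path, querystring}) => {
    // Drop tracking parameters that don't change the rendered page,
    // so thousands of URL variants can hit the same cached response.
    const filtered = querystring
        .split('&')
        .filter(Boolean)
        .filter((pair) => !UNWANTED_PARAMS.includes(pair.split('=')[0]))
        .join('&')

    return {path, querystring: filtered}
}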

Tame the Third-Party Script Beast

A cartoon developer is taming a large 'beast' made of code and tangled wires. The developer is putting a collar labeled 'async' and holding a leash labeled 'defer' on the beast, while corralling other parts of it towards a pen labeled 'Lazy Load Zone'.
Taming the Third-Party Script Beast: A visual guide to managing external scripts for better web performance.

Third-party scripts are performance killers. You need to control them.

  • Audit and Justify: Open Chrome DevTools and look at the Network panel. Make a list of every third-party script. For each one, ask: “Do we need this?” If the value doesn’t outweigh the performance cost, eliminate it.
  • Load Asynchronously: Never, ever load a third-party script synchronously. Always use the async or defer attribute. Async lets it download without blocking the page, and defer makes sure it only runs after the page has finished parsing.
  • Lazy Load Widgets: For things like chat widgets or social media buttons, don’t load them initially. Use JavaScript to load the script only when the user scrolls near it or clicks a placeholder.
  • Use a Consent Management Platform (CMP): A good CMP integrated with Google Tag Manager (GTM) is a must-have. It stops marketing and ad tags from loading until the user gives consent. This is great for both privacy and performance.
  • Check Your Service Worker: Your PWA’s service worker might block requests to domains that aren’t on its whitelist. When adding a new third-party script, ensure its domain is configured correctly in your service worker to prevent blocking.
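
For the lazy-loaded widget idea, here’s a small, framework-agnostic sketch; the script URL and element id are placeholders:

// A hedged sketch: inject a chat widget script only on first user interaction.
function loadChatWidget() {
    if (document.getElementById('chat-widget-script')) {
        return; // already loaded
    }
    var script = document.createElement('script');
    script.id = 'chat-widget-script';
    script.src = 'https://example.com/chat-widget.js'; // placeholder URL
    script.async = true;
    document.body.appendChild(script);
}

// Defer loading until the first scroll or click, whichever comes first.
['scroll', 'click'].forEach(function (eventName) {
    window.addEventListener(eventName, loadChatWidget, {once: true, passive: true});
});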

Create Bulletproof, Lightning-Fast Images

Images are usually the heaviest part of a page. Optimising them is non-negotiable.

  • Serve Responsive Images: Use the <picture> element or srcset and sizes attributes on your <img> tags. This allows the browser to select the perfect-sized image for the device, so a phone doesn’t have to download a massive desktop image.
  • Use Modern Formats: Use WebP format for images whenever possible. It provides significantly better compression than JPEG or PNG, often cutting file size by 25-35% without noticeable quality loss. The eCDN (Cloudflare) currently only supports WebP; if you use a third-party image provider, check what’s available, as there are now more modern options, including AVIF.
  • Compress, Compress, Compress: Use an image optimisation service or build tools to compress your images. A JPEG quality of 85 is usually a great sweet spot.
  • Prevent Layout Shift with Dimensions: This is a super-easy and effective fix for CLS. Always add width and height attributes to your <img> and <video> tags. This allows the browser to reserve the right amount of space before the media loads, preventing the annoying content jump.
  • Lazy Load Offscreen Images: For any image that’s not in the initial viewport, add the native loading=”lazy” attribute. This instructs the browser to delay loading those images until the user scrolls down to them, which significantly improves performance.
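
Putting the last two tips together in a PWA Kit component, a minimal sketch (names and dimensions are placeholders):

import React from 'react'

// A minimal sketch of an optimised, CLS-safe product image.
const ProductTileImage = ({src, alt}) => (
    <img
        src={src}
        alt={alt}
        width="300"        // explicit dimensions reserve space and prevent layout shift
        height="300"
        loading="lazy"     // defer offscreen images until the user scrolls near them
        decoding="async"
    />
)

export default ProductTileImage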

Step 4: Make it a Habit - Monitoring, Debugging, and Continuous Improvement

Performance isn’t a one-and-done task. It’s a discipline. You need a solid workflow to monitor, debug, and prevent problems from creeping back in.

Your Performance-Sleuthing Toolkit

You have a powerful set of free tools to become a performance detective.

  • PageSpeed Insights (PSI): This is your starting point. The top section, “Discover what your real users are experiencing,” is your CrUX field data—your final report card. The bottom section, “Diagnose performance issues,” is your lab data from Lighthouse. Use its “Opportunities” and “Diagnostics” to find the technical fixes you need to improve your field data scores.
  • Google Lighthouse: Running Lighthouse from Chrome DevTools provides the most detailed results. Dig into the recommendations to find render-blocking resources, massive network payloads, and unused JavaScript. The “Progressive Web App” audit is also crucial for making sure your service worker and manifest are set up correctly.
  • Chrome DevTools:
    • Performance Panel: This is your primary tool for identifying INP issues. Record a page load or interaction to get a “flame chart” of everything the main thread is doing. Look for long tasks (marked with a red triangle) to find the exact JavaScript functions that are causing lag.
    • Network Panel: Use this to inspect all network requests. Check your Cache-Control headers, analyse asset sizes, and use “Request blocking” to temporarily disable third-party scripts to see how much damage they’re doing.
    • Application Panel: This is your PWA command centre. Inspect your manifest, check your service worker’s status, clear caches, and simulate being offline to test your app’s reliability.

Symptom / Poor Metric: Poor LCP on Product Detail Page

  • Likely PWA Kit Cause(s): 1. Large, unoptimized hero image. 2. Slow, sequential API calls in getProps/useQuery during SSR. 3. Low CDN cache hit ratio.

  • Recommended Diagnostic Tool(s): 1. PageSpeed Insights to identify the LCP element. 2. ?__server_timing=true to check ssr:fetch-strategies time. 3. MRT logs and CDN analytics.

  • Actionable Solution(s): 1. Compress the hero image, serve it in WebP format, use srcset. 2. Refactor data fetching to use Promise.all or a single aggregated API call. 3. Set longer Cache-Control headers.

Symptom / Poor Metric: Poor INP on Product Listing Page

  • Likely PWA Kit Cause(s): 1. Long JavaScript task during client-side hydration. 2. Excessive re-renders when applying filters. 3. A blocking third-party analytics script.

  • Recommended Diagnostic Tool(s): 1. DevTools Performance Panel to identify long tasks. 2. React DevTools Profiler to visualize component renders. 3. DevTools Network Panel to block the script and re-test.

  • Actionable Solution(s): 1. Code-split the PLP’s JavaScript. 2. Use React.memo, useCallback, and useMemo on filter components. 3. Defer or lazy-load the third-party script.

Symptom / Poor Metric: High CLS on Homepage

  • Likely PWA Kit Cause(s): 1. Images loading without width and height attributes. 2. A cookie consent banner or ad injected dynamically. 3. Web fonts causing a flash of unstyled text (FOUT).

  • Recommended Diagnostic Tool(s): 1. Lighthouse audit to identify elements causing shifts. 2. DevTools Performance Panel with “Screenshots” enabled to see the shifts happen.

  • Actionable Solution(s): 1. Add explicit width and height to all <img> tags. 2. Reserve space for the banner/ad with CSS. 3. Preload key fonts using <link rel="preload">.

PWA Kit-Specific Debugging Tricks

The PWA Kit has some built-in secret weapons for debugging.

  • The __server_timing Parameter: Add ?__server_timing=true to any URL in your dev environment. You’ll get a Server-Timing header in the response that breaks down exactly how long each part of the SSR process took. It’s perfect for figuring out if a slow response is because of a slow API or a heavy React component.
  • The ?__server_only Parameter: Use this parameter to see the pure, server-rendered version of a page without any client-side JavaScript. It’s great for seeing what search engines see and for spotting layout shifts between the server and client versions.
  • Managed Runtime Log Center: In production, the Log Center is your go-to for troubleshooting. You can search and filter logs from your app server to diagnose server-side errors and performance issues that only show up in the wild.

Wrapping Up: Your Journey to a High-Performance Storefront

Building a blazingly fast storefront with the Salesforce PWA Kit isn’t about finding one magic trick. It’s a discipline. It requires understanding what users care about, knowing your architecture’s strengths and weaknesses, and committing to a cycle of measuring, optimising, and monitoring.

In the cutthroat world of B2C commerce, that’s not just a nice-to-have—it’s the ultimate competitive advantage that drives real revenue.

The post From Lag to Riches: A PWA Kit Developer’s Guide to Storefront Speed appeared first on The Rhino Inquisitor.

]]>
Mastering Sitemaps in Salesforce B2C Commerce: A Developer’s Guide https://www.rhino-inquisitor.com/mastering-sitemaps-in-sfcc/ Mon, 16 Jun 2025 07:30:19 +0000 https://www.rhino-inquisitor.com/?p=12947 In Salesforce B2C Commerce Cloud (SFCC), the sitemap is more than just a list of links. It's a powerful, scalable system for telling search engines exactly what's on your site and how important it is. Getting it right means faster indexing, better visibility, and a happier marketing team. Getting it wrong can leave your brand-new products invisible to Google.

The post Mastering Sitemaps in Salesforce B2C Commerce: A Developer’s Guide appeared first on The Rhino Inquisitor.


Let’s be honest, as developers, “SEO” can sometimes feel like a four-letter word handed down from the marketing team. But what if I told you that one of the most critical SEO tools, the sitemap, is actually a fascinating piece of platform architecture you can control, automate, and even extend with code?

In Salesforce B2C Commerce Cloud (SFCC), the sitemap serves a purpose beyond simply listing links. It is a robust and scalable system that communicates to search engines precisely what content exists on your site and its level of importance. Properly configuring your sitemap results in faster indexing, improved visibility, and greater satisfaction for your marketing team. If done incorrectly, however, it could render your newly launched products invisible to search engines like Google, as well as tools such as ChatGPT and Google Gemini.

This guide will walk you through everything you need to know, from leveraging the powerful out-of-the-box tools to writing custom integrations and mastering sitemaps in a headless PWA Kit world.

The "Easy Button": The Built-in Sitemap Generator

Think of the standard SFCC sitemap generator as the “easy button” that handles 80% of the work for you. For massive e-commerce sites with millions of URLs, this is a lifesaver.

At its core, the platform cleverly sidesteps search engine limitations, like the 50,000-URL and 50 MB (uncompressed) file size caps, by creating a two-tiered system. It generates a main sitemap_index.xml file, which is the only URL you need to give to Google. This index file then points to a series of child sitemaps (sitemap_0.xml, sitemap_1.xml, etc.) that contain the actual URLs.
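For reference, the generated index is just a standard sitemap index document along these lines; the domain is illustrative, and the child file names follow the pattern described above.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap_0.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap_1.xml</loc>
  </sitemap>
</sitemapindex>
```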

You control all of this from the Business Manager: Merchant Tools > SEO > Sitemaps.

Your Control Panel: The Settings Tab

Screenshot: The Sitemap Settings in the Business Manager.

The Settings tab is your main control panel. Here’s what you, as a developer, need to care about:

  • Content Inclusion: You can choose exactly what gets included: products, categories, content assets, and even product images.
  • Priority & Change Frequency: These settings are direct hints to search engine crawlers. Priority (a scale of 0.0 to 1.0) suggests a URL’s importance relative to other pages on your site. Change Frequency (from always to never) suggests how often a page’s content is updated.
  • Product Rules: You can get granular, choosing to include only available products, available and orderable products, or all products. This directly ties into your inventory and data strategy.
  • hreflang for Multi-Locale Sites (Alternate URLs): If you manage a site with multiple languages or regions, enabling Include alternate URLs (hreflang) is a huge win. It automatically adds the necessary tags to tell search engines about the different versions of a page, a task that can be a manual pain on other platforms.

The Golden Rule of Scheduling

Screenshot: The Job tab in the sitemap configuration in the Business Manager.

You can run the sitemap generation manually or, more practically, schedule it as a recurring job from the Job tab. Here is the single most important operational detail:

Always schedule the sitemap job to run after your daily data replication from the staging instance. 

If you run it before, all the new products and content from that day’s replication will be missing from the sitemap, rendering it stale the moment it’s created.

Going Custom: When the Built-in Isn't Enough

Screenshot: The Custom Sitemaps tab in the Business Manager.

What happens when you have content that doesn’t live in SFCC? Maybe you have a WordPress blog, an external reviews provider, or a separate forum. You need to get those URLs into your site’s sitemap index. SFCC gives you two powerful paths to do this.

Path 1: The Classic Job Step (The Batch Approach)

The traditional method involves building a custom job step within a cartridge. This is ideal for batch-oriented processes, such as pulling a sitemap file from an SFTP server on a nightly basis.

Your script would use the dw.sitemap.SitemapMgr script API. The key method is SitemapMgr.addCustomSitemapFile(hostName, file), which takes a file your script has fetched and places it in the correct directory to be picked up by the main sitemap generation job. This requires some classic SFCC development: writing the script and defining the job step in a steptypes.json or steptypes.xml file.
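To make that concrete, here is a minimal sketch of such a job step script, assuming an earlier step has already dropped the external file into IMPEX; the IMPEX path, file name, and HostName parameter are assumptions, and the SFTP download itself is omitted. The step still needs to be declared in your cartridge’s steptypes.json (or steptypes.xml) before it shows up in the job builder.

```javascript
'use strict';

var File = require('dw/io/File');
var Status = require('dw/system/Status');
var SitemapMgr = require('dw/sitemap/SitemapMgr');

/**
 * Custom job step: registers an externally sourced sitemap file so the
 * platform's sitemap generation picks it up and links it from the index.
 */
exports.execute = function (parameters) {
    // Assumed location where a previous step placed the downloaded file.
    var file = new File(File.IMPEX + '/src/sitemaps/blog-sitemap.xml');

    if (!file.exists()) {
        return new Status(Status.ERROR, 'ERROR', 'Custom sitemap file not found');
    }

    // HostName is an assumed job step parameter; it must match the hostname
    // your sitemap settings in Business Manager are configured for.
    SitemapMgr.addCustomSitemapFile(parameters.HostName, file);

    return new Status(Status.OK);
};
```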

Path 2: The Modern SCAPI Endpoint (The Real-Time Approach)

For a more modern, API-first architecture, consider using the Salesforce Commerce API (SCAPI). The Shopper SEO API provides the uploadCustomSitemapAndTriggerSitemapGeneration endpoint.

This is a PUT request that enables a trusted external system to upload a custom sitemap file directly to SFCC and initiate the generation process asynchronously. This is the ideal solution for event-driven systems. For example, a headless CMS could use a webhook to call this endpoint the instant a new article is published, getting that URL into the sitemap almost immediately.
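What that call might look like from a Node 18+ (ESM) script is sketched below. The exact endpoint path, query parameters, and OAuth scopes depend on your SCAPI configuration, so they are pulled from environment variables here rather than spelled out; treat this as a shape, not a contract, and check the Shopper SEO API reference for the specifics.

```javascript
import {readFile} from 'node:fs/promises';

// SCAPI_SITEMAP_URL: the full URL of the uploadCustomSitemapAndTriggerSitemapGeneration
// endpoint for your tenant. SCAPI_ACCESS_TOKEN: a token carrying whatever scopes that
// endpoint requires. Both are assumptions about how you manage configuration.
const body = await readFile('./sitemaps/blog-sitemap.xml', 'utf8');

const response = await fetch(process.env.SCAPI_SITEMAP_URL, {
    method: 'PUT',
    headers: {
        Authorization: `Bearer ${process.env.SCAPI_ACCESS_TOKEN}`,
        'Content-Type': 'application/xml'
    },
    body
});

if (!response.ok) {
    throw new Error(`Custom sitemap upload failed with HTTP ${response.status}`);
}
console.log('Custom sitemap uploaded; sitemap regeneration triggered.');
```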

Choices... choices

Each integration method has its place:

  • Manual Upload: best for one-offs and testing. Mechanism: the UI in Business Manager. Vibe: Quick & Dirty.
  • Script API Job: best for batch processes (e.g., a nightly sync). Mechanism: a custom job step using dw.sitemap.SitemapMgr. Vibe: Classic & Reliable.
  • SCAPI Endpoint: best for real-time, event-driven integrations. Mechanism: a PUT request to the Shopper SEO API. Vibe: Modern & Agile.

Sitemaps in the Headless Universe: PWA Kit Edition

Going headless with the Composable Storefront (PWA Kit) changes the game, but the sitemap strategy remains firmly rooted in the backend—and for good reason. The SFCC backend is the system of record for the entire product catalog. 

Forcing the PWA Kit frontend to generate the sitemap itself would mean a nightmare of API calls just to fetch all of that data.

Instead, you use the backend’s power and bridge the gap.

The Standard Headless Playbook

  1. Configure the Hostname Alias: This is the most critical step. In Business Manager (Merchant Tools > SEO > Aliases), you must create an alias that exactly matches your PWA Kit’s live domain (e.g., www.your-pwa.com). This ensures the backend generates URLs with the correct domain.
  2. Generate in Business Manager: Use the standard job you’ve already configured.
  3. Update robots.txt: In your PWA Kit project’s code, add the Sitemap directive to your robots.txt file, pointing to the full URL of the sitemap index (e.g., Sitemap: https://www.your-pwa.com/sitemap_index.xml).
  4. Proxy the Request: Your PWA Kit app needs to handle requests for the sitemap. You can add a rule to your server-side rendering logic (often in app/ssr.js) to proxy requests for /sitemap_index.xml and its children to the SFCC backend, where the files actually live (a minimal proxy sketch follows this list). Or use the eCDN for this job!
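Here is a minimal sketch of that proxy rule, written as a plain Express handler. Where exactly you register it depends on how your ssr.js wires up the app, the function name is hypothetical, and the backend hostname is a placeholder you would replace with your instance’s domain.

```javascript
// Call this with the Express app created in app/ssr.js, before the SSR
// catch-all route is registered. SFCC_ORIGIN is a placeholder for the
// instance that actually hosts the generated sitemap files.
const SFCC_ORIGIN = 'https://your-instance.dx.commercecloud.salesforce.com';

export function registerSitemapProxy(app) {
    app.get(['/sitemap_index.xml', /^\/sitemap_\d+\.xml$/], async (req, res) => {
        try {
            // Fetch the generated file from the SFCC backend and relay it as-is.
            const upstream = await fetch(`${SFCC_ORIGIN}${req.path}`);
            res.status(upstream.status);
            res.set('Content-Type', upstream.headers.get('content-type') || 'application/xml');
            res.send(await upstream.text());
        } catch (error) {
            res.status(502).send('Sitemap temporarily unavailable');
        }
    });
}
```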

The Hybrid Approach for PWA-Only Routes

But what about pages that only exist in your PWA? Think of custom React-based landing pages or an “About Us” page that isn’t a content asset in SFCC. The backend generator has no idea these exist.

The solution is an elegant hybrid approach that you can automate in your CI/CD pipeline:

  1. Backend Generates Core Sitemap: The scheduled job on SFCC runs as normal, creating sitemaps for all products, categories, and content assets.

  2. Frontend Generates Custom Sitemap: As a build step in your CI/CD pipeline, run a script that scans your PWA Kit’s routes and generates a small, separate sitemap file (e.g., pwa-custom.xml) containing only these frontend-specific URLs (a build-script sketch follows at the end of this section).

  3. Automate the Merge: The final step of your deployment script makes a PUT request to the uploadCustomSitemapAndTriggerSitemapGeneration SCAPI endpoint, uploading the pwa-custom.xml file. This tells SFCC to regenerate the main index, adding a link to your new custom file.  

This strategy uses the right tool for the job: the backend’s efficiency for the massive catalog and the frontend’s build process to handle its own unique pages.
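As a minimal Node 18+ sketch of the build step in point 2: the route list, domain, and output path below are hardcoded assumptions, and a real script would derive the routes from your PWA Kit route definitions instead. Point 3 then pushes the resulting file with a PUT call like the one sketched earlier.

```javascript
import {writeFile} from 'node:fs/promises';

// Frontend-only routes the SFCC sitemap job knows nothing about. In practice,
// derive these from your routes file instead of hardcoding them here.
const pwaOnlyRoutes = ['/about-us', '/our-story', '/landing/summer-lookbook'];
const origin = 'https://www.your-pwa.com';

const urlEntries = pwaOnlyRoutes
    .map((route) => `  <url>\n    <loc>${origin}${route}</loc>\n  </url>`)
    .join('\n');

const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urlEntries}
</urlset>
`;

await writeFile('./pwa-custom.xml', sitemap, 'utf8');
console.log(`Wrote ${pwaOnlyRoutes.length} PWA-only URLs to pwa-custom.xml`);
```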

Conclusion

By mastering these tools and strategies, you can transform sitemap management from a chore into a powerful, automated part of your development and deployment workflow. You’ll build more robust sites, ensure content is discovered faster, and become an SEO hero in the process.

The post Mastering Sitemaps in Salesforce B2C Commerce: A Developer’s Guide appeared first on The Rhino Inquisitor.
