Unprecedented Scale

The True Size of Serialization

As the industry moves from lot- or shipment-level tracking to serializing product at the unit level, the jump in transaction processing is like going from taking a single step to climbing Mount Everest in the same amount of time.

Serialization requires that you generate, process, and store dramatically larger data volumes, and your supply chain operations will slow down if solution response times exceed even one second.

The massive database size and transaction volume, constrained by the time that operational processes allow, call for a paradigm shift in technology. TraceLink is the only company to address this challenge with a differentiated platform and system architecture.



Download White Paper

Why Traditional Relational Database Architectures Do Not Scale

Before serialization, shipping a batch of 10,000 units would require four simple transactions: (1) Create the batch, (2) Pick the pallet, (3) Ship the pallet, and (4) Send an ASN. With serialization, you’ll need to process 60,000 transactions for the same batch as you update data that corresponds to every single unit.
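To make the arithmetic concrete, the jump from four transactions to 60,000 can be sketched as follows. The six-events-per-unit breakdown used here (commission, two levels of aggregation, pick, ship, ASN line) is an illustrative assumption chosen to be consistent with the 60,000 figure above, not a published breakdown.

```python
# Illustrative sketch: transaction volume before vs. after unit-level
# serialization for one 10,000-unit batch. The per-unit event count is
# an assumption chosen to match the 60,000 figure cited above.

BATCH_SIZE = 10_000

# Pre-serialization: four batch-level transactions.
batch_level_transactions = ["create batch", "pick pallet", "ship pallet", "send ASN"]

# Unit-level serialization: assume ~6 serialized events per unit
# (e.g., commission, aggregate to case, aggregate to pallet,
# pick, ship, ASN line) -- a hypothetical breakdown.
EVENTS_PER_UNIT = 6

before = len(batch_level_transactions)
after = BATCH_SIZE * EVENTS_PER_UNIT

print(f"Before serialization: {before} transactions")   # 4
print(f"After serialization:  {after} transactions")    # 60000
print(f"Growth factor:        {after // before}x")      # 15000x
```

Even under different assumptions about the exact event mix, the takeaway is the same: transaction volume grows with the unit count, not the batch count.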

Most solution providers attempt to use an EPCIS schema with a relational database management system (RDBMS) for serialization. Unfortunately, this approach begins to fail when confronted with the exponential rate at which serialization data is created and replicated across tables, and with the transaction processing speeds required to perform at operational scale.
This is why RDBMS-EPCIS vendors require constant archiving to limit database size, which is unrealistic when you need to process real-time verifications for product returns.

Achieving Unparalleled Scalability at the Lowest Cost

TraceLink is live with hundreds of customers and purpose-built to commission, aggregate, process, and ship batches of serial numbers at scale by leveraging:

  • An elastic computing fabric that can meet peak loads or dynamically shrink based on need.
  • SQS queues that distribute work across tens to thousands of machines to achieve high throughput and 99.999999999% durability.
  • Amazon DynamoDB, a NoSQL database that stores trillions of objects while maintaining consistent read/write times.
  • Amazon Redshift, a petabyte-scale analytics engine for storing and analyzing EPCIS events.
  • Sub-second response times, preserving warehouse efficiencies.
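The queue-based fan-out pattern in the list above can be sketched conceptually. The snippet below uses Python's standard-library queue and threads as a local stand-in for an SQS queue feeding a fleet of workers; the worker count and the "commissioning" step are illustrative assumptions, not TraceLink implementation details.

```python
import queue
import threading

# Conceptual sketch of queue-based work distribution (the SQS pattern
# above), using stdlib primitives as a local stand-in for SQS + workers.

work_queue: "queue.Queue[int]" = queue.Queue()
results = []
results_lock = threading.Lock()

def commission_worker() -> None:
    """Pull serial-number jobs off the queue until it is drained."""
    while True:
        try:
            serial = work_queue.get_nowait()
        except queue.Empty:
            return
        # Stand-in for real work: commissioning/recording the serial.
        with results_lock:
            results.append(f"SN-{serial:06d}")
        work_queue.task_done()

# Enqueue 10,000 unit-level jobs, then fan work out across workers.
for serial in range(10_000):
    work_queue.put(serial)

workers = [threading.Thread(target=commission_worker) for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))  # 10000 serials processed across 8 workers
```

Because workers pull independently from a shared queue, throughput scales by adding consumers rather than by making any single machine faster, which is the same property the SQS-based fabric provides at cloud scale.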

Download TCO Infographic

Risky Business: Archiving Serial Numbers Freezes Real-Time Access

RDBMS-EPCIS vendors require constant archiving in order to limit the size of the data set and maintain performance.

This approach of purging and archiving is unrealistic when you need serialized data available in real time for processing saleable returns and verifications. To solve internet-scale data retrieval problems, a cloud-native architecture is the only scalable way to ensure that serialized transactions are always available in real time, whether the business data is from last week or last year.

The Strengths of Amazon Web Services: Scalability, Availability, and Elasticity

Applications designed specifically to run inside Amazon Web Services can fully capitalize on the scalability, availability, and elasticity of the Amazon environment, which surpasses traditional data center limitations in relational technology, disaster recovery, and resource flexibility.

In a serialized world, the best way to ensure supply is with a track and trace solution that is always available, massively scalable, and delivers top-notch performance, including the ability to read serial numbers out of its serialization repository at rates in excess of 75,000 per second.