Scenario
A very large MasterList is periodically updated with a small create/update/delete feed. Rebuilding the master list stream from scratch is computationally expensive and time-consuming when only (for example) 220 out of 150,000,000 records actually change.
...
- Create the Stream to act as a MasterList keyed on a unique identifier.
- Create streams and/or file collectors to update the MasterList.
- Create Secondary Streams to pull data from the MasterList Stream, along Non-Historied pipes.
- Group the non-historied pipes by the unique identifier.
- Construct the attribute logic in those secondary streams so that it can distinguish the correct record from all the records that share the unique identifier.
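The steps above can be sketched roughly as follows. This is a minimal illustration, not a product API: an in-memory dict stands in for the keyed MasterList stream, and the field names (`id`, `op`, `version`) are hypothetical. The `latest_by_key` function plays the role of the secondary stream's attribute logic, picking the correct record from all records sharing a unique identifier.

```python
def apply_feed(master, feed):
    """Apply a small create/update/delete feed to a keyed master list,
    rather than rebuilding the whole list."""
    for rec in feed:
        key, op = rec["id"], rec["op"]
        if op in ("create", "update"):
            # Records sharing the same unique identifier accumulate under one key.
            master.setdefault(key, []).append(rec)
        elif op == "delete":
            master.pop(key, None)
    return master

def latest_by_key(master):
    """Secondary-stream style attribute logic: of all records that share a
    unique identifier, keep only the one with the highest version."""
    return {key: max(recs, key=lambda r: r["version"])
            for key, recs in master.items()}

master = {}
apply_feed(master, [
    {"id": "A", "op": "create", "version": 1, "name": "alpha"},
    {"id": "A", "op": "update", "version": 2, "name": "alpha2"},
    {"id": "B", "op": "create", "version": 1, "name": "beta"},
])
current = latest_by_key(master)
# current["A"] is the version-2 record; record "B" is untouched.
```

Only the 3 feed records are touched here; the rest of the master list (however large) is left in place, which is the point of the incremental approach.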