Scenario
Files (or database records) often arrive containing duplicate data. Sometimes this is acceptable, and sometimes duplicate records must be ignored.
Duplicate data, or data with duplicate keys, is a feature of most enterprise systems. PhixFlow provides several ways of handling duplicated data, and which one a model uses depends entirely on the system requirements. In this case, we simply want to ignore duplicates.
Step-by-step guide: Identifying Duplicate Records
- Click to select the stream that may contain duplicates.
- Right-click on the model view pane and select 'Merge selected streams'.
- In the pipe configuration dialog that pops up, group on the field containing the duplicated data and click the green tick to save your input.
- In the Automatic Stream Configuration dialog that appears, select 'just key attributes' from the drop-down.
- Run analysis on the resulting stream. Viewing the data, you will see that for each value of the grouping key, PhixFlow reports the number of records in that group, and highlights the lines where the count is greater than one.
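The group-and-count step that PhixFlow performs here can be sketched in plain Python. The record and field names below (`key`, `value`) are hypothetical placeholders, not taken from the article:

```python
from collections import Counter

# Hypothetical records; "key" stands in for the field being grouped on.
records = [
    {"key": "A", "value": 1},
    {"key": "B", "value": 2},
    {"key": "A", "value": 3},
]

# Count records per grouping key, as the aggregate stream would.
counts = Counter(r["key"] for r in records)

# Keys with more than one record are the duplicates to highlight.
duplicates = {k: n for k, n in counts.items() if n > 1}
print(duplicates)  # {'A': 2}
```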
Step-by-step guide: Removing Duplicates
- Load all data (including duplicates) into a stream.
- Create a new stream from this stream, making it an aggregate stream.
- On the pipe linking the two streams, set the maximum number of records to one and group on the field containing the duplicated data. Apply sorting on another attribute, depending on which record you want to keep. For example, to keep the latest record, sort by the last-updated date.
- As an alternative to setting the maximum number of records per group, use the syntax in[1].value in the attribute expressions of the aggregate stream to retrieve just the first record from each group.
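The dedup steps above can be sketched as follows. This is a minimal illustration, assuming each record carries a last-updated date and we keep the newest record per key; the field names are hypothetical:

```python
# Hypothetical records with a last-updated date per grouping key.
records = [
    {"key": "A", "updated": "2024-01-01", "value": 1},
    {"key": "A", "updated": "2024-03-05", "value": 3},
    {"key": "B", "updated": "2024-02-02", "value": 2},
]

# Sort newest-first, then keep only the first record seen for each key --
# equivalent to grouping on the key, sorting by date, and capping each
# group at a maximum of one record.
latest = {}
for rec in sorted(records, key=lambda r: r["updated"], reverse=True):
    latest.setdefault(rec["key"], rec)

deduped = list(latest.values())
```

`setdefault` only stores a record the first time a key appears, which mirrors taking `in[1]` from each sorted group.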