Forms: Pipe
A pipe is a connector that links two elements in a PhixFlow model. A pipe joining a datasource to a data collector has no editable details.
Form: Pipe Details
The following fields are configured on the Details tab:
...
- Push: data is 'pushed' rather than 'pulled' into the output stream. Every time data is written to the input stream, the pipe 'informs' the output stream and the Analysis Engine will attempt to run stream generation on the output stream. A push pipe passes data along as it is calculated, which may be before the streamset is complete or any of the records have been committed to the database; this may improve performance over pull pipes, which wait until the input streams have committed their data to the database before reading it back in. Because push pipes do not get their data from the database, but instead receive it directly from the input stream, any filters or aggregation specified on the pipe are ignored. Push pipes are shown as blue lines on modelling screens.
- Pull: a pull pipe waits until the input stream has committed its data to the database and then reads it back in, which allows the database to apply any filter or aggregation rules specified on the pipe. Pull pipes are shown as blue lines on modelling screens.
- Look-up: used to reference data, see lookup for more details. Look-up pipes are shown as dotted lines on modelling screens.
...
This field is used to determine which Streamsets to read from the input Stream. There are 4 options available:
- Latest: This will cause the pipe to read only the latest Streamset from the input Stream.
- Previous: This will cause the pipe to read the Streamset before the latest Streamset on the input Stream.
- All: This will cause the pipe to read all Streamsets from the input Stream. However, in some circumstances the input Stream may have Streamsets with dates in the future relative to the Streamset the output Stream is creating. This can happen, for example, if you have rolled back a number of Streamsets on the output Stream but have not rolled back the corresponding Streamsets on the input Stream, and have then requested that one of the rolled back Streamsets be rebuilt. Some of the Streamsets on the input Stream will then have dates in the future relative to the Streamset you are rebuilding.
By default the pipe will ignore any Streamsets with dates in the future relative to the Streamset you are generating, so that if you are rebuilding an old Streamset the pipe will retrieve the same data on the rerun as it retrieved when the Streamset was first built (a dated example is given after the field list below).
Similarly, if you are running a Transactional Stream, it is possible that other analysis runs which started after yours complete before yours while your run is still taking place, thereby generating additional Streamsets on the input Stream with a future date relative to the date of the Streamset you are generating.
For Transactional input Streams it is possible to tell the pipe not to ignore these future Streamsets by ticking the Read Future Data tickbox on the Advanced tab.
- Custom: If this option is selected then you need to specify which input Streamsets to read by configuring combinations of the following fields on the Advanced tab:
- Read Future Data
- Only collect from the same run
- Max Stream Sets
- Historied
- From Date Offset
- To Date Offset
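As an illustration of the future-Streamset behaviour described under All above (the dates here are assumed, not taken from a real model): suppose the output Stream's Streamsets for 5, 6 and 7 January have been rolled back, the corresponding Streamsets on the input Stream have not, and the 5 January output Streamset is then rebuilt. The input Stream's 6 and 7 January Streamsets now have dates in the future relative to the Streamset being generated, so by default the pipe ignores them and reads the same data that was read when the 5 January Streamset was first built. If the input Stream were Transactional, ticking Read Future Data on the Advanced tab would include them instead.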
...
Normally, when a Pipe requests data from a non-static input Stream, that Stream will first attempt to bring itself up to date, generating new Streamsets as necessary, before supplying the requested data. However, if this field is ticked, the input Stream will not attempt to do this.
...
If this flag is not ticked then it is an indication to PhixFlow that the Stream is not ready to be used during any analysis runs and should therefore be ignored.
The following fields are configured on the Advanced tab:
...
Mandatory
...
If ticked, when multiple Streams are being merged, there must be an input record from this Pipe for an output record to be generated by the output Stream.
If this is a push pipe with positive offsets and this flag is ticked then the notification to create another stream set will only be pushed along the pipe if the last stream set created contains at least one record.
...
The Execution Strategy determines how this pipe should be implemented.
Where this pipe is a push or pull pipe into a Merge Stream, the Default Execution Strategy is to select all stream items from the input Stream sorted by the Group By attributes, then to read items from all input pipes simultaneously, constructing candidate sets from items with matching key values.
Where the Directed Execution Strategy is applied to a pipe (the pipe must not be Mandatory), the other pipes with the Default Strategy operate as above, each being sorted then merged to generate a sequence of candidate sets; the Directed pipe then runs worker tasks to select the additional items by matching key value. These selects are batched up so that each worker reads items for many key values in a single select (see Worker Size), and many workers are run in parallel (see Max Workers).
In general, the Directed strategy should only be used where
- the number of items in the source Stream is so large that the sorting phase of the Default strategy takes too long, or
- only a small subset of the items are needed (the majority being discarded because they have key values that don't match the key values read on one of the other mandatory pipes).
Changing the Execution Strategy will make the Merge faster or slower depending on the input data and the details of the input and output Streams, but will not change the business logic of the Merge (i.e. which input items are grouped into candidate sets).
If the input to the pipe is a Collector, the list of key values is made available as _keyList. This will typically be used with an in clause in the Collector query. For example:
select * from customer where account_num in ({_keyList}) order by account_num
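For illustration, if the key values gathered for one worker's batch were the account numbers 1001, 1003 and 1007 (assumed values), the query above would reach the database expanded to something like:
select * from customer where account_num in (1001, 1003, 1007) order by account_num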
...
The maximum number of concurrent worker tasks.
If blank, this defaults to 1.
...
The number of key values to read for a single worker task (which runs a single select statement).
If blank, this defaults to 1000. This is also the maximum value that can be used when reading from an Oracle database, since Oracle limits the number of expressions in an in list to 1,000.
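As a rough illustration (the figures are assumed): if a run produces 2,500 distinct key values to look up and Worker Size is left at its default of 1000, the Directed pipe would issue three selects covering 1000, 1000 and 500 key values respectively; with Max Workers set to 2, at most two of those selects would run at the same time.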
...
When doing a lookup, there are two common scenarios:
- The pipe does a single lookup onto a stream or database table to get a large number of records in one go (e.g. 1000 records)
- The pipe does many lookups, getting a small number of records for each lookup (e.g. 10 records at a time).
Try to estimate the largest number of records that the lookup pipe reads on a single read from a stream or database collector.
The cache is used when carrying out lookups from streams or database collectors. During a lookup, PhixFlow will retrieve records in sets that match the filter on the pipe, e.g.
WHERE AddressLine1 = _out.Address
For efficiency, the records are cached (stored temporarily in memory) so that if the same set of records needs to be looked up again, the records are readily available without going back to the database.
There is a limit to how many records can be stored in the cache. The Cache Size field allows you to specify this limit. If no value is set, it will default to the system-wide maximum pipe cache size, specified on the System Tuning tab of the System Configuration dialog.
If a single read brings back a number of records exceeding 90% of the specified cache size, a warning message will be logged to the console. If a single read brings back 100% or more of the cache size, and the enforce cache size limit flag is ticked in the system configuration, then the analysis run will stop completely, with a message of the form:
The Pipe "stream_name.lookup_pipe_name" cache is 100% full (the cache size is 10).
A model administrator can use this information to make an informed decision about the design of the model.
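As a worked example with assumed figures: with Cache Size set to 1,000, a single read that returns 950 records exceeds 90% of the cache and logs the warning described above, while a single read of 1,000 or more records stops the analysis run if the enforce cache size limit flag is ticked. In that case the cache size should be increased to comfortably exceed the largest single read you expect (see the estimate suggested under the lookup scenarios above).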
...
The offset applied to the start of the collection period, relative to the period in the output stream that requires populating.
The units are the period of the output stream, that is, if the output stream has a daily period, then setting from date offset = -1.0 means that the start of the collection period will be 1 day earlier than the start of the period in the output stream that is being calculated.
If this is a push pipe then a positive offset can be input. This will tell the stream to run again and generate another stream set.
...
The offset applied to the end of the collection period, relative to the period in the output stream that requires populating.
The units are the period of the output stream, that is, if the output stream has a daily period, then setting to date offset = -1.0 means that the end of the collection period will be 1 day earlier than the end of the period in the output stream that is being calculated.
If this is a push pipe then a positive offset can be input. This will tell the stream to run again and generate another stream set.
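As an illustration (the offset values are assumed): for an output stream with a daily period, setting From Date Offset = -7.0 and To Date Offset = 0.0 gives a collection period that starts 7 days before the start of the output period being calculated and ends at the end of that period, i.e. the pipe collects the day being calculated plus the 7 preceding days.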
...
- Only collect from the same run
- Max Stream Sets (this may also be set to zero)
- Historied
...
During Look Ups
...
During File Export
...
During Drill Down
...
The following fields are configured through separate tabs on the form:
...
Form Icons
The form provides the standard form icons.
The form also provides the following icons on the Filter tab:
- Adds a clause to the filter.
- Deletes the selected clause or condition from the filter.
- Adds a condition to a clause of the filter.
The form also provides the following icons on both the Sort/Group and Aggregate Attributes tabs:
- Shows the list of attributes that can be added as sort/group or aggregate attributes.
- Deletes the selected object from the list.
- Adds an object to the list.
See Also
...