A pipe is a connector that links two elements in a PhixFlow model and sends data from the input to the output. A pipe allows you to control which attributes and which records from the input are delivered to the output, although in most cases - with minimal configuration - you will get all the attributes and the records from the current run.
The pipe must be enabled to make it active.
For advanced configuration, see Advanced Pipe Configuration.
Note: A pipe joining a datasource to a data collector has no details to edit. All the configuration for the output data set occurs in the collector: either a database collector for a database datasource, or an HTTP collector for an HTTP datasource.
Name
The name is used to refer to the pipe in other model elements.
Note: The name must start with a letter, can contain no special characters other than the underscore character '_', and cannot be an Attribute Function name.
Type
There are 3 options available:
...
Data To Read
Specify what input data to use. There are 4 options available:
...
Note: If the Only collect from same run flag is ticked, the pipe will only collect data from inputs in the same analysis run. This is only used when building a transactional model, where it allows several analysis runs to take place at the same time without interfering with each other.
Note: In some circumstances the input Stream may have Stream Sets with dates in the future relative to the Stream Set being generated for the output Stream. This can happen, for example, if you have rolled back a number of Stream Sets on the output Stream but have not rolled back the corresponding Stream Sets on the input Stream, and have then requested that the output Stream is brought up to date: some of the Stream Sets on the input Stream will have dates in the future relative to some of the Stream Sets you are rebuilding.

By default, pipes ignore any Stream Sets with dates in the future relative to the Stream Set being generated. This means that if you are rebuilding an old Stream Set, the pipe retrieves the same data on the rerun as it retrieved when the Stream Set was first built.

Similarly, if you are running a Transactional Stream, other analysis runs which started after yours may complete before yours while your analysis run is taking place. These will have generated additional Stream Sets on the input Stream with a future date relative to the date of the Stream Set you are generating. For Transactional input Streams you can tell the pipe not to ignore these future Stream Sets by ticking the Read Future Data tick box on the Advanced tab.
Static
Normally when a pipe requests data from a non-static input stream, that stream will first attempt to bring itself up to date, generating new stream sets as necessary, before supplying the data requested. However, if this field is ticked, the input stream will not run.
Mandatory
If ticked, when multiple Streams are being merged then there must be an input record from this Pipe for an output record to be generated by the output Stream.
Note: If this is a push pipe with positive offsets and this flag is ticked, the notification to create another stream set will only be pushed along the pipe if the last stream set created contains at least one record.
Multiplier
This causes the pipe to present each candidate set to the output stream in a different way than usual. The multiplier flag is on the Advanced tab of the form.
For each output record generated by a stream, the stream gets a set of records from each of its input pipes. If the multiplier flag is ticked on one of these pipes, the stream instead generates an output record for each record in the set provided by the multiplier pipe. For each of these output records, the other input pipes provide the same set of records as normal.
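As a purely hypothetical illustration (the pipe names and record labels are invented): suppose a stream has two input pipes, rates and accounts, and the multiplier flag is ticked on rates:

```
rates (multiplier) delivers:  R1, R2, R3
accounts delivers:            A1, A2

Output records generated:
  record 1:  built from R1, with the accounts set {A1, A2}
  record 2:  built from R2, with the accounts set {A1, A2}
  record 3:  built from R3, with the accounts set {A1, A2}
```

Three output records are generated rather than one, and each sees the full set of records from every other input pipe.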
Filters, sorting and grouping, aggregating
Filters, sorting and grouping, and aggregating are configured through their own sections on the form:
...
The data being delivered by a pipe can be filtered.
Filters are made up of a set of clauses; each clause in turn contains a number of conditions. These conditions must be satisfied for data to be passed through the pipe.
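For example, a filter might consist of a single clause containing two conditions (the attribute names here are hypothetical):

```
Clause:
    Country   Equals   England
    Status    Equals   Active
```

Records from the input only pass through the pipe when they satisfy the filter.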
Form Icons
The form provides the following buttons:
...
...
Add a clause or condition.
...
...
Delete a clause or condition.
...
...
Specifies that the value entered is a literal value. Click this icon to change this, so that the value entered is evaluated as an expression.
...
...
Specifies that the value entered is evaluated as a PhixFlow expression. Click this icon to change this, so that the value entered is treated as a literal. Note: ["123", "234", "345"] looks like a literal value, but it can be evaluated as an expression; see the example after this list.
...
...
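To illustrate the difference between the two modes, consider the value from the note above (a sketch of the intended interpretation, not output captured from PhixFlow):

```
Value entered:          ["123", "234", "345"]

As a literal (ABC):     compared as the exact text ["123", "234", "345"]
As an expression (fx):  evaluated first, producing a list of the three
                        values 123, 234 and 345
```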
Filter on Current User
Sometimes when running analysis you want to select, from the source, only records belonging to the currently logged-in user. To set a filter where, say, a source attribute Owner equals the currently logged-in user, add a condition to the filter like this:

    Owner    Equals    _user.name    [fx]

The fx setting is needed so that _user.name is evaluated as an expression rather than treated as a literal value.
Enter a list of values for an "Is In" or "Is Not In" filter
If you want to filter based on a list of values, use the Is in or Is not in comparators, then type the list of values into the comparison field as a comma-separated list, like this:

    Country    Is in    England, France, Germany    [ABC]

In this case you must NOT click the ABC icon to convert the value to fx, because this would indicate that the value is an expression; it must be left as a literal value. If you do click the ABC icon, then the value must be entered like this:

    Country    Is in    ["England","France","Germany"]    [fx]
Cache Extraction Filter
A cache extraction filter allows you to further filter the data retrieved by a pipe. These are not commonly used, but are sometimes helpful when either:
- Optimising performance on a lookup pipe when, for a set of records, the record you require from the lookup depends on non-key data, e.g. a date
- Getting data from a pull pipe when the filter requires that you compare one value in each record with another; this is not possible within a standard filter.
For case 1, when using a lookup pipe, the data retrieved is stored in a cache; see Cache Size for details. The cache extraction filter allows you, as you are processing a set of output records, to use different cached entries from the lookup for each of the records you are processing. This is very fast compared to looking up from the source (i.e. going back to an external DB table or even another PhixFlow stream) for each output record.
E.g. you want to look up the credit rating for a customer for a set of transactions; in the output, each transaction is represented by a single output record. You create an indexed lookup pipe using CustNo as the key for the index. This means that for each new CustNo you encounter in the data, all the credit rating entries for that CustNo are retrieved by the pipe and placed into the cache. The credit rating for each customer is fully historied, so you get a number of entries for each CustNo. To get the relevant lookup entry for each output record (each transaction), you need to compare the transaction date of the output record to the dates of the credit rating entries in the cache. So to extract the relevant record, you include a cache extraction filter of the form:
```
StartDate <= _out.TransDate && (EndDate >= _out.TransDate || EndDate == _NULL)
```
Cache extraction filters are entered freehand.
The attribute names referenced must exist in a stream. This means that each attribute must be one of:
- an attribute in a source stream, if you are reading from a stream
- if you are reading from an external database table, one of the fields returned by the database collector AND an attribute in the output stream - i.e. to use an attribute with the source as a database collector, there must be an attribute of matching name in the output stream
- an attribute in the destination stream, in which case you refer to it using the format _out.AttributeName
...
The data delivered by a pipe can be grouped and sorted by attributes of the input stream. This is set up in the Group/Order section of the pipe form. (This section is called Sort/Group for pull and push pipes, and Order/Index for lookup pipes.)
The following fields are configured at the level of the pipe:
...
...
...
...
...
...
...
...
...
...
...
Form: Grouping Attribute Details
The following fields are configured for each grouping attribute:
...
This field is only available for lookup pipes where an index match type has been set. Lookup pipes can be configured for fast "indexed" access to cached data, which is collected from external tables, files or other streams. Indexed access is controlled by configuring the pipe with an index and setting index expressions on the "Group By" attributes here; see the sketch below.
...
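As a minimal sketch, following the CustNo credit-rating example used elsewhere on this page (the attribute name is hypothetical): if a lookup pipe is indexed on customer number, the CustNo grouping attribute would be given an index expression that supplies the key value for the output record currently being processed:

```
_out.CustNo
```

When each output record is processed, this expression is evaluated and the result is used to select the matching entries from the pipe's indexed cache.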
Aggregate Attributes define the aggregated properties that are available when data is read from an aggregating Pipe. Note that Aggregate Attributes are not available on Pipes from Database Collectors (any aggregation can be performed in the query SQL), nor are they available on Pipes from File Collectors.
Possible aggregate values are counts, summations, averages and maximum or minimum values of the Stream Items grouped in the Group/Order tab of the Pipe; a sketch follows at the end of this section.
...
Only aggregate attributes whose values can sensibly be aggregated. For example, do not try to sum an attribute which contains text.
...
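As a hypothetical sketch of an aggregating pipe (the attribute names are invented): grouping transaction records by customer and defining two aggregate attributes, a count and a sum:

```
Group By:  CustNo

Aggregate attributes:
    TransCount   =  Count of the grouped records
    TotalAmount  =  Sum of Amount    (Amount must be numeric)
```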
The following fields are available in the Basic Settings section if you set Data To Read = Custom:
...
- Only collect from same run is not ticked
- Max Stream Sets is blank or zero
- Historied is not ticked
Advanced section
The following fields are configured in the Advanced section:
...
Normally, when a pipe tries to read from an input stream that contains an incomplete stream set, PhixFlow will attempt to complete the stream set before passing data down the pipe. However if the stream is static (i.e. the stream has its 'static' flag ticked) or is effectively static (i.e. all of the pipes reading from it in this analysis run are static) then, instead of completing the stream set, an error message is produced indicating that you cannot read from this stream because it contains an incomplete stream set.
If you do not want this error message to be produced when reading from static (or effectively static) streams, but would instead prefer PhixFlow to ignore the incomplete stream sets, then you must tick this box on all pipes that will read from the input stream in this analysis run. If there are multiple pipes that read from the input stream during this analysis run and even one of the pipes does not have this box ticked then you will not be allowed to read from the stream and the error message will be produced.
Pipes which are not used in the current analysis run (for example where they lead to streams on branches of the model which are not run by the current task plan) have no effect on whether or not the error message is produced.
...
The cache is used when carrying out lookups from streams or database collectors. When doing a lookup, there are two common scenarios:
- The pipe does a single lookup onto a stream or database table to get a large number of records in one go (e.g. 10,000 records)
- The pipe does many lookups, getting a small number of records for each lookup (e.g. 10 records at a time).
In case 2, the results returned are typically based on a key value, e.g. an account number. This will be used in the filter of the pipe, if you are reading from a stream, or in the query, if you are reading from a database collector. For example, the query in a database collector will include the condition:
```
WHERE AccountNumber = _out.AccountNum
```
For efficiency, the records are cached (stored temporarily in memory) so that if the same set of records need to be looked up again they are readily available without going back to the database.
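As a sketch of how the cache behaves with the query above (the account numbers are invented): each distinct value of _out.AccountNum produces one cached result set, and repeated values are served from memory:

```
_out.AccountNum = 1001  ->  not in cache: query runs, results stored under key 1001
_out.AccountNum = 1002  ->  not in cache: query runs, results stored under key 1002
_out.AccountNum = 1001  ->  in cache: results returned without re-querying
```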
This field allows you to set a limit on the size of the cache. Setting a limit is important because if you do not, the cache can become very large and consume a lot of memory, which can lead to a slowdown in both your tasks and those of other users of PhixFlow.
To set the cache size, try to estimate the largest number of records that the lookup pipe will return on a single read.
If you do not set a limit, it will default to the system-wide default, specified in the Maximum Pipe Cache Size in the System Tuning tab of the System Configuration.
...
Cache warnings and errors
If a single read brings back over 90% of the specified cache size, a warning message will be logged to the console.
If a single read brings back 100% or more of the cache size, a second warning message will be generated. If the Enforce Cache Size limit flag is ticked in System Configuration, instead of a warning an error will be generated, and the analysis run will stop completely.
```
The Pipe "stream_name.lookup_pipe_name" cache is 100% full (the cache size is 10).
```
Note: Every time the lookup pipe is referenced, PhixFlow calculates the values of all of the variable elements of the query or pipe filter, and checks whether it already has a set of data in the cache retrieved using this set of variable values. If so, the data is immediately returned from the cache. Otherwise, a new set of data is read from the stream or collector. If adding the new records to the cache would cause it to exceed the maximum cache size, previously cached results are removed to make enough room for the new results.
...
During lookups
...
During File Export
...
Max Records To Read
...
The Execution Strategy determines how this pipe should be implemented. See the section on Directed Merge Strategy.
...
Max Workers
...
This field is only available if Strategy = Directed
The maximum number of concurrent worker tasks.
If blank, this defaults to 1.
...
This field is only available if Strategy = Directed
The number of key values to read for a single worker task (which runs a single select statement).
...