...
Normally, when a pipe requests data from a non-static input stream, that stream will first attempt to bring itself up to date, generating new stream sets as necessary, before supplying the data requested. However, if this field is ticked, the input stream will not attempt to bring itself up to date.
Mandatory
If ticked, when multiple Streams are being merged, there must be an input record from this Pipe for an output record to be generated by the output Stream.
Note: If this is a push pipe with positive offsets and this flag is ticked, then the notification to create another stream set will only be pushed along the pipe if the last stream set created contains at least one record.
Multiplier
Ticking this flag causes the pipe to present each candidate set to the output stream in a different way from usual. The multiplier flag is on the Advanced tab of the form.
For each output record generated by a stream, the stream normally gets one set of records from each of its input pipes. If the multiplier flag is ticked on one of these pipes, the stream instead generates an output record for each record in the set provided by the multiplier pipe. For each of those output records, the other input pipes provide the same set of records as normal. For example, if the multiplier pipe provides a set of 4 records and another input pipe provides a set of 3 records, the stream generates 4 output records, each built from the same 3 records from the other pipe plus one of the 4 records from the multiplier pipe.
Filters, sorting and grouping, aggregating
Filters, sorting and grouping, and aggregating are configured in their own sections of the form:
Section | Description |
---|---|
Filter | Allows the user to set up a filter on the pipe. |
Sort/Group | Specify the group by / order by attributes on the pipe. |
Aggregate attributes | Specify any aggregate attributes on the pipe. |
Advanced | Configure advanced features on the pipe. |
...
Add a clause or condition. | |
Delete a clause or condition. | |
Specifies that the value entered is a literal value. Click this icon to change this, so that the value entered is evaluated as an expression. |
Specifies that the value entered is evaluated as a PhixFlow expression. Click this icon to change this, so that the value entered is treated as a literal. Note: ["123", "234", "345"] looks like a literal value, but it can be evaluated as an expression (illustrated below the table). |
Open the expression in a larger editor. |
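As an illustration of the note above (a sketch; the behaviour depends on the literal/fx setting of the field): entered as a literal, the text below is treated as a single string of characters; entered as an expression, it evaluates to a list of three separate values, which is usually what an "Is In" filter requires.
```
["123", "234", "345"]
```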
...
Filter
...
A cache extraction filter allows you to further filter the data retrieved by a pipe. Cache extraction filters are not commonly used, but are sometimes helpful in one of two cases:
- Optimising performance on a lookup pipe when, for a set of records, the record you require from the lookup depends on non-key data, e.g. a date.
- Getting data from a pull pipe when the filter needs to compare one value in each record with another; this is not possible within a standard filter (see the sketch immediately below).
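For the second case, a minimal sketch of a cache extraction filter that compares two values within the same record, which a standard filter condition cannot express (the attribute names AmountPaid and AmountBilled are illustrative):
```
AmountPaid < AmountBilled
```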
For case 1, when using a lookup pipe, data retrieved is stored in a cache; see cache size for details. The cache extraction filter allows you, as you are processing a set of output records, to use different cached entries from the lookup for each of the records you are processing. This is very fast compared to looking up from the source (i.e. going back to an external database table, or even another PhixFlow stream) for each output record.
E.g. you want to look up the credit rating for a customer for a set of transactions; in the output, each transaction is represented by a single output record. You create an indexed lookup pipe using CustNo as the key for the index. This means that for each new CustNo you encounter in the data, all the credit rating entries for that CustNo are retrieved by the pipe and placed into the cache. The credit rating for each customer is fully historied, so you get a number of entries for each CustNo. To get the relevant lookup entry for each output record (each transaction), you need to compare the transaction date of the output record to the dates of the credit rating entries in the cache. So to extract the relevant record, you include a cache extraction filter of the form:
```
StartDate <= _out.TransDate && (EndDate >= _out.TransDate || EndDate == _NULL)
```
Cache extraction filters are entered freehand. The attribute names referenced must exist in a stream. This means that each attribute must be one of:
- an attribute in a source stream, if you are reading from a stream
- if you are reading from an external database table: one of the fields returned by the database collector AND an attribute in the output stream; i.e. to use an attribute whose source is a database collector, there must be an attribute of matching name in the output stream
- an attribute in the destination stream, in which case you refer to it using the format _out.AttributeName
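Putting these rules together, a sketch that mixes a plain source attribute with a destination-stream reference (AccountStatus is an illustrative attribute name; StartDate and TransDate come from the credit-rating example above):
```
AccountStatus == "OPEN" && StartDate <= _out.TransDate
```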
Filter on Current User
Views of entities that show a User or Owner attribute can be filtered to show only those records for which the Owner/User matches the currently logged-in user. This particularly applies to Alarms and user Tasks. To set a filter where the Owner equals the currently logged-in user, add a condition to the filter like this:
Owner Equals _user.name fx
Enter a list of values for an "Is In" or "Is Not In" filter
If you want to select a set of records which have a field set to one of a range of possible values, select the "Is in" or "Is not in" comparator from the drop-down list, then type the list of values into the edit box to the right of the comparator as a comma-separated list, like this:
Country Is in England, France, Germany ABC
In this case you should NOT click the ABC icon to the right of the value, which would convert it to an fx indicating that the value is a formula; leave it specified as a literal value. If you do click the ABC icon to specify that the value entered is a formula, then the value should be entered like this:
Country Is in ["England","France","Germany"] fx
...
Pipe Form Reference
Details tab
The following fields are configured on the Details tab:
...
This field is used to determine which Streamsets to read from the input Stream.
...
Normally, when a Pipe requests data from a non-static input Stream, that Stream will first attempt to bring itself up to date, generating new Streamsets as necessary, before supplying the data requested. However, if this field is ticked, the input Stream will not attempt to do this.
...
Filter on Current User
Sometimes when running analysis you want to select, from the source, only records belonging to the currently logged-in user. To set a filter where, say, an attribute in the source called Owner equals the currently logged-in user, add a condition to the filter like this:
Owner Equals _user.name fx
Enter a list of values for an "Is In" or "Is Not In" filter
If you want to filter based on a list of values, use the Is in or Is not in comparators, then type the list of values into the comparison field as a comma-separated list, like this:
Country Is in England, France, Germany ABC
In this case you must NOT click the ABC icon to convert the value to an fx, because this would indicate that the value is a formula; it must be left as a literal value. If you do click the ABC icon, then the value must be entered like this:
Country Is in ["England","France","Germany"] fx
A Pipe can be grouped and sorted by attributes of the input stream. These are set up in the Group/Order section of the Pipe form. This section is called Sort/Group for pull and push pipes, and Order/Index for lookup pipes.
The following fields are configured at the level of the pipe:
Field | Description |
---|---|
Maximum Number of Records Per Group | If a value is entered into this field, then when PhixFlow collates the input records into groups according to the specified Grouping Attribute Details (see below), once the maximum number of records has been added to a group, any additional records for that group will be discarded. Records are added to the group according to the specified sort order. This can be useful, for example, where you know that for a given set of grouping attributes you may get multiple records but you are only interested in the most recent record for that set. In this case you can configure the pipe to group by the grouping attributes, sort by an appropriate date attribute in descending order, and set the Maximum Number of Records to 1. This ensures that you only get the most recent record for the specified group when you read from this pipe. |
Index Type | This setting is only available for lookup pipes. Look-up pipes can be configured for fast "indexed" access to cached data collected from external tables, files or from other streams. Indexed access is controlled by configuring the pipe with an index and setting index expressions on its grouping attributes. If the Type field on the Pipe is set to 'Look-up', the field "Index Type" becomes available. This can have the value "None", meaning that there are no index keys, or "Exact Match", "Best Match" or "Near Match", as described below: |

- Exact Match: The pipe will retrieve data from its cache based on an exact match look-up with the values provided after evaluating the index expressions on the "Group By" attributes.
- Best Match: The pipe will retrieve data from its cache based on a "Best Match" look-up after evaluating the index expressions on the "Group By" attributes. Note that the last Group By attribute with a key expression is used for the best match look-up. The index keys on any Group By attributes with a lower sequence number are used as an initial "Exact Match" to find the set of data on which to do the "Best Match". The "Best Match" is defined as the longest key value which matches the evaluated index expression.
- Near Match: The pipe will retrieve data from its cache based on a "Near Match" look-up after evaluating the index expressions on the "Group By" attributes. Note that the last Group By attribute with a key expression is used for the near match look-up. The index keys on any Group By attributes with a lower sequence number are used as an initial "Exact Match" to find the set of data on which to do the "Near Match". When "Near Match" is selected, an additional field appears where you can enter an expression evaluating to a number: the allowed number of edits (deletions, insertions, substitutions and transpositions) that can be made when comparing the result of the index expression to the index key in order to achieve a match.

For example, if the index key is "Smyhte" and the result of the index expression is "Smith", this would still be a match provided that the allowed number of edits is 3 or more (i.e. substitute the 'i' for a 'y', transpose the 't' and the 'h', and then insert an 'e' at the end).
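The allowed number of edits is itself an expression; a minimal sketch, assuming a fixed tolerance is acceptable, is just a numeric constant, e.g. the following would allow the "Smyhte"/"Smith" example above to match:
```
3
```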
Form: Grouping Attribute Details
The following fields are configured for each grouping attribute:
Field | Description |
---|---|
Attribute | Name of an attribute in the input stream. |
Order | The position of this attribute in the list of sorting/grouping attributes. |
Direction | The direction of the sort based on this attribute: Ascending; or Descending. |
Group | If this attribute is part of the candidate key set, the Group flag must be ticked. Otherwise, the attribute will be used only to sort the data in the candidate set. |
Index Expression | This field is only available for lookup pipes on which an Index Type has been set. Look-up pipes can be configured for fast "indexed" access to cached data, collected from external tables, files or from other streams. Indexed access is controlled by configuring the pipe with an index and setting index expressions on the "Group By" attributes here (see the sketch below the table). |
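As a minimal sketch, reusing the CustNo name from the credit-rating example in the Filter section (the names are illustrative): if the lookup pipe groups by CustNo, an index expression such as the one below is evaluated for each output record to select the matching cache entries.
```
_out.CustNo
```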
Aggregate Attributes define the aggregated properties that are available when data is read from an aggregating Pipe. Note that Aggregate Attributes are not available on Pipes from Database Collectors (any aggregation can be performed in the query SQL), nor are they available on Pipes from File Collectors.
Possible aggregate values are counts, summations, averages and maximum or minimum values of Stream Items grouped in the Group/Order tab of the Pipe.
Field | Description |
---|---|
Stream Function | The Aggregate Function to apply, e.g. Count or Sum. |
Attribute | The name of the Stream Attribute to be aggregated. Note that the value in this field is not used if the Aggregate Function is Count. |
Name | A new name for the aggregated attribute. Note that this can be the same as the original Attribute. |
Order | The order of the aggregate attribute. |
Only aggregate attributes that can meaningfully be aggregated; for example, do not try to sum an attribute which contains text.
The following fields are available on the Details tab if you set Data To Read = Custom:
...
Field | Description |
---|---|
Mandatory | If ticked, when multiple Streams are being merged, there must be an input record from this Pipe for an output record to be generated by the output Stream. If this is a push pipe with positive offsets and this flag is ticked, then the notification to create another stream set will only be pushed along the pipe if the last stream set created contains at least one record. |
Multiplier | When processing data, a Stream first constructs CandidateSets ready for processing from its (non-Multiplier) input Pipes. For each Multiplier Pipe, the Stream then multiplies each CandidateSet by creating a copy of the original CandidateSet for each row returned from the Multiplier Pipe. Note that each resulting CandidateSet contains the original CandidateSet plus one row from the Multiplier Pipe. |
Execution Strategy | The Execution Strategy determines how this pipe should be implemented. See the section on Directed Merge Strategy. |
Max Workers | The maximum number of concurrent worker tasks. If blank, this defaults to 1. |
Worker Size | The number of key values to read for a single worker task (which runs a single select statement). If blank, this defaults to 1000. This is the maximum value that can be used when reading from an Oracle database. |
Cache Size | The cache is used when carrying out lookups from streams or database collectors. When doing a lookup, there are two common scenarios: either all of the data required is read and cached in one go, or a separate set of records is looked up for each record processed. In the second case, the results returned are typically based on a key value, e.g. an account number. This will be used in the filter of the pipe, if you are reading from a stream, or in the query, if you are reading from a database collector; for example, the query in a database collector will include a condition of the form AccountNumber = &lt;key value&gt;. For efficiency, the records are cached (stored temporarily in memory) so that if the same set of records needs to be looked up again it is readily available without going back to the database. This field allows you to set a limit on the size of the cache. Setting a limit is important because if you do not, the cache can become very large and consume a lot of memory, which can lead to a slow-down in both your tasks and those of other users of PhixFlow. To set the cache size, try to estimate the largest number of records that the lookup pipe will return on a single read. If you do not set a limit, it defaults to the system-wide default, specified by Maximum Pipe Cache Size on the System Tuning tab of the System Configuration. |
Buffer Size | The buffer size used to perform the stream calculation. If a large amount of data is being processed, setting a large buffer size will give better performance. |
Allow Incomplete Stream Sets | Normally, when a pipe tries to read from an input stream that contains an incomplete stream set, PhixFlow will attempt to complete the stream set before passing data down the pipe. However, if the stream is static (i.e. the stream has its 'static' flag ticked) or is effectively static (i.e. all of the pipes reading from it in this analysis run are static) then, instead of completing the stream set, an error message is produced indicating that you cannot read from this stream because it contains an incomplete stream set. If you do not want this error message to be produced when reading from static (or effectively static) streams, but would instead prefer PhixFlow to ignore the incomplete stream sets, then you must tick this box on all pipes that will read from the input stream in this analysis run. If there are multiple pipes that read from the input stream during this analysis run and even one of them does not have this box ticked, then you will not be allowed to read from the stream and the error message will be produced. Pipes which are not used in the current analysis run (for example, where they lead to streams on branches of the model which are not run by the current task plan) have no effect on whether or not the error message is produced. |
Data Expected | This field is available when the Pipe Type is Push or Pull. This flag allows the user to specify that the pipe is expecting to receive data. If it is ticked but no data is received, this is treated as an error. |
Pipe View | The pipe view is used to limit which fields are retrieved down the pipe and in what order, and in some circumstances how each field is to be formatted. You can select from any of the views that have been configured on the source stream. Please note that any sorting or filtering of records must be applied directly on the pipe, and is not inherited from the pipe view. The pipe view is used in three contexts. During look-ups: pipe views can be used on lookup pipes to limit the fields that are returned by the lookup request. This is most useful where you want to read and cache data on a lookup pipe from a stream that has many attributes but where only a small number of them are actually required. You can simply create a new view on the source stream listing only the attributes needed, then specify it as the pipe view on the lookup pipe. Only those attributes specified on the view will then be loaded. Without a pipe view, the pipe will load and cache all of the attributes from the stream, which may consume a significant amount of free memory if there are a large number of records. During file export: when sending data to a file exporter, only those fields specified on the pipe view will be exported. If no pipe view is supplied, all fields will be exported. During drill down: when drilling down from an alarm or stream item, the pipe view is only used to determine which attributes from the source stream should be shown in the drill-down display, and in what order. |
Pipe Exporter | An exporter can be selected from the set of Database Exporters configured for the input stream. This exporter can then be used from the Drill Down View. This feature is useful when PhixFlow is used to 'recommend' a set of updates. By configuring an alarm to be generated when a set of recommendations is made, the user can drill down through the alarm to see the list of recommendations and then hit the exporter icon to apply them. Any filters applied on the pipe will be applied when the data is pushed to the pipe exporter, so it is possible that not all of the data in the grid will be exported; some records may be rejected by the filter. |
Max Records To Read | The maximum number of records that should be read down this pipe. The pipe may read more than this number of records if it is configured to carry out multiple reads simultaneously, e.g. if it is connected to a File Collector which has been configured to read multiple files simultaneously, or if the pipe's strategy is "Directed" with multiple workers. |
...