
A pipe joining a datasource to a data collector has no details to edit. All the configuration for the output data set occurs in the collector - either a database collector for a database datasource, or an HTTP collector for an HTTP datasource.

Basic Settings

Field
Description
Name

Enter a name. The name is used to refer to the pipe in other model elements. Pipe names default to in.

The name:

  • must start with a letter
  • must not be an Attribute Function name
  • must not include special characters, except underscore _
Enabled

Untick to prevent the pipe from being used during an analysis run.

Tick to indicate the pipe properties are complete and the pipe is ready to be used.

Static

Untick so that, when the pipe requests data from a non-static input stream, that stream will first attempt to bring itself up to date, generating new stream sets as necessary, before supplying the data requested.

Tick to prevent the input stream from updating itself. The pipe will pull the existing data from the input stream.

Pipes from collectors cannot be marked as static.

Mandatory

Tick to indicate that, when multiple streams are being merged, there must be an input record from this pipe for an output record to be generated by the output stream.

Advanced - force multiple analysis runs: if this is a push pipe with positive offsets and this option is ticked, the notification to create another stream set is only pushed along the pipe if the last stream set created contains at least one record.

Multiplier

Untick is the default.

Tick so that the pipe acts as a multiplier. Normally, for each output record generated by a stream, the stream gets a set of records from each of its input pipes. If the multiplier flag is ticked on one of these pipes, the stream instead generates an output record for each record in the set of records provided by the multiplier pipe. For each of those output records, the other input pipes provide the same set of records as normal. This causes the pipe to present each candidate set to the output stream in a different way than usual.

Type

Select:

  • Pull: pull pipes are the most common type in PhixFlow - they "pull" data from the input to the output. Pull pipes are shown as solid arrows on models.
  • Look-up: look-up pipes are used to enrich data. Typically, you will have one or more pull pipes to supply the base data for an output and, if needed, one or more look-up pipes to enrich the base data with values from additional inputs. Look-up pipes are shown as dashed lines on models.
  • Push: data is "pushed" rather than "pulled" into the output stream. Push pipes are most commonly used when sending data from streams to exporters (File Exporters, Database Exporters, HTTP Exporters). Push pipes are shown as dotted lines on models.
Data to Read

Select the type of input data to use.

  • Latest: supply data from the current run (the latest stream set). This is the most commonly used option.
  • Previous: supply data from the previous run (the previous stream set). This is used when you are comparing data for the current run with data from the previous run, for example, today's data with yesterday's.
  • All: supply data from all runs (all stream sets).
  • All Previous: supply data from all runs except the current run (all stream sets except the latest stream set).
  • Same Run: this option should only be used where the input and output streams are set to Period: Transactional. The pipe will only collect data from inputs in the same analysis run. This configuration supports several analysis runs going on at the same time without interfering with each other.
  • Custom: select this option to display additional settings, described in the Custom Data to Read section, below. We recommend that you only use the custom settings when directed to by PhixFlow consultants or support.
Read Future Data

Use this option to exclude or include input stream sets that have future dates relative to the stream set you are generating. For details about how future stream sets occur, see Managing Future Stream Sets, below.

Untick to exclude future stream sets from this analysis run. This is the default.

Tick to include future stream sets in this analysis run. For example, for a stream with Period: Transactional, you will want to include new stream sets that are added to the input stream after your analysis run starts.

Output


Managing Future Stream Sets

In some circumstances the input stream may have stream sets with dates in the future relative to the stream set being generated for the output stream. This may happen, for example, if:

  1. you roll back some stream sets on the output stream
  2. but do not roll back the corresponding stream sets on the input stream
  3. and then request that the output stream is brought up to date.

Some of the stream sets on the input stream will then have dates in the future relative to some of the stream sets you are rebuilding.

By default, the Read Future Data checkbox is not ticked. This means pipes ignore any stream sets with dates in the future relative to the stream set you are generating. You want to ignore future stream sets when you rebuild an old stream set, because you want the pipe to retrieve the same data on the rerun as it retrieved when the stream set was first built.

When you run analysis on a stream with a transactional period, it is possible that, while your analysis is still running, a different run can start and complete. This run can generate additional stream sets on the input stream with a future date relative to the date of the stream set you are generating. For transactional input streams, you want the pipe to use these future stream sets. To do this, tick the Read Future Data checkbox.


Filters, sorting and grouping, aggregating

Filters, sorting and grouping, and aggregating are configured through their own sections on the form:

...


Filter



Filter Editor

The data being delivered by a pipe can be filtered.

Filters are made up of a set of clauses; each clause in turn contains a number of conditions. These conditions must be satisfied for data to be passed through the pipe.
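For example, a simple filter that only passes records matching both comparisons could be built from two conditions like this (the attributes Country and Status, and their values, are purely illustrative):

Where ALL...
Country Equals England ABC
Status Equals Active ABC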

Form Icons

The form provides the following buttons:

Expand: open the expression in a larger editor.

Filter on Current User

Sometimes when running analysis you want to select, from the source, only records belonging to the currently logged-in user. To set a filter where, say, an attribute in the source called Owner equals the currently logged-in user, add a condition to the filter like this:

Owner Equals _user.name fx

Enter a list of values for an "Is In" or "Is Not In" filter

If you want to filter based on a list of values, use the Is in or Is not in comparators, then type the list of values into the comparison field as a comma-separated list, like this:

Country Is in England, France, Germany ABC

In this case you must NOT click the ABC icon to convert the value to an fx, because this will indicate that the value is a formula; it must be left as a literal value. If you do click the ABC icon, then the value must be entered like this:

Country Is in ["England","France","Germany"] fx
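The expression form is also useful when the list needs to include a value that is not a fixed literal. As a sketch only, assuming the expression form accepts expression values such as _user.name inside the list (the Owner attribute and the "admin" literal are illustrative):

Owner Is in [_user.name, "admin"] fx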

Field
Description
Include History Records

Untick: tbc??

Tick: tbc??

Condition

Select one of the options:

  • Where ALL...
  • Where ANY...

To add more conditions, hover your mouse pointer over this field to display the Add button.

Add

Hover your mouse pointer over the Condition field to display this button. Click it to add another condition to your filter.

Clause

Select an option from the list. PhixFlow adds more fields where you can:

  • select how the filter matches (for example, equals, contains, is null)
  • enter a string that the filter uses to match the data. The string can be an expression or a literal string.
Delete

Hover your mouse pointer over a filter clause to display this button. Click it to delete the selected clause or condition from the filter.

ABC (literal)

Indicates the value entered is a literal value. Click this icon to treat the value as an expression.

fx (expression)

Indicates the value entered is an expression. Click this icon to treat the value as a literal string.

Note: ["123", "234", "345"] looks like a literal value but it can be evaluated as an expression.

Expand

Click this icon to open the expression in a larger editor.

Cache Extraction Filter

A cache extraction filter allows you to further filter the data retrieved by a pipe. These are not commonly used, but are sometimes helpful when either:

  1. optimising performance on a lookup pipe when, for a set of records, the record you require from the lookup depends on non-key data, e.g. the date
  2. getting data from a pull pipe when the filter requires that you compare one value in each record with another; this is not possible within a standard filter.

For case 1, when using a lookup pipe, the data retrieved is stored in a cache. See cache size for details. The cache extraction filter allows you, as you are processing a set of output records, to use different cached entries from the lookup for each of the records you are processing. This is very fast compared to looking up from the source (i.e. going back to an external DB table or even another PhixFlow stream) for each output record.

E.g. you want to look up the credit rating for a customer for a set of transactions - in the output, each transaction is represented by a single output record. You create an indexed lookup pipe using CustNo as the key for the index. This means that for each new CustNo you encounter in the data, all the credit rating entries for that CustNo are retrieved by the pipe and placed into the cache. The credit rating for each customer is fully historied, so you get a number of entries for each CustNo. To get the relevant lookup entry for each output record (each transaction), you need to compare the transaction date of the output record to the dates of the credit rating entries in the cache. So to extract the relevant record, you include a cache extraction filter in the form:

Code Block
StartDate >= _out.TransDate && (EndDate <= _out.TransDate || EndDate == _NULL)

Cache extraction filters are entered free hand.

The attribute names referenced must exist in a stream. This means that each attribute must be one of:

  1. an attribute in a source stream, if you are reading from a stream
  2. if you are reading from an external database table, one of the fields returned by the database collector AND an attribute in the output stream - i.e. to use an attribute with the source as a database collector, there must be an attribute of matching name in the output stream
  3. an attribute in the destination stream, in which case you will refer to it using the format _out.AttributeName
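For case 2, a cache extraction filter can compare one attribute of a record with another attribute of the same record, which a standard filter cannot do. A minimal sketch on a pull pipe, assuming hypothetical attributes InvoiceDate and PaidDate, would be:

Code Block
PaidDate >= InvoiceDate

This passes only records whose PaidDate is on or after their InvoiceDate.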

Sort/Group

Filter: Allows the user to set up a filter on the pipe. Also allows you to set the Include Audit Records flag; if not set, superseded records are filtered out.
Sort/Group: Specify the group/order by attributes on the pipe.
Aggregate attributes: Specify any aggregate attributes on the pipe.
Advanced: Configure advanced features on the pipe.

Aggregate Attributes


...