Versions Compared

Key

  • This line was added.
  • This line was removed.
  • Formatting was changed.

Forms: Pipe

A pipe is a connector that links two elements in a PhixFlow model. A pipe joining a datasource to a data collector has no editable details.

Form: Pipe Details

The following fields are configured on the Details tab:

...

  • Push: data is 'pushed' rather than 'pulled' into the output stream. Every time data is written to the input stream, the pipe 'informs' the output stream and the Analysis Engine will attempt to run stream generation on the output stream. A push pipe passes data along as it is calculated, which may be before the streamset is complete or any of the records have been committed to the database. This may improve performance over pull pipes, which wait until the input streams have committed their data to the database before reading the data back in again (which allows the database to apply the filtering and aggregation rules). Because push pipes do not get their data from the database, but instead receive it direct from the input stream, any filters or aggregation specified on the pipe are ignored. Push pipes are shown as blue lines on modelling screens.
  • Pull: data is 'pulled' from the input stream into the output stream when the output stream runs. This is the most common pipe type.
  • Look-up: used to reference data, see lookup for more details. Look-up pipes are shown as dotted lines on modelling screens.

...

This field is used to determine which Streamsets to read from the input Stream. There are 4 options available:

  • Latest: This will cause the pipe to read only the latest Streamset from the input Stream.
  • Previous: This will cause the pipe to read the Streamset before the latest Streamset on the input Stream.
  • All: This will cause the pipe to read all Streamsets from the input Stream. However, in some circumstances the input Stream may have Streamsets that have dates in the future relative to the Streamset the output Stream is creating. This may happen, for example, if you have rolled back a number of Streamsets on the output Stream but have not rolled back the corresponding Streamsets on the input Stream and have then requested that one of the rolled back Streamsets be rebuilt. Some of the Streamsets on the input Stream will then have dates in the future relative to the Streamset you are rebuilding.
    By default the pipe will ignore any Streamsets with dates in the future relative to the Streamset you are generating so that if you are rebuilding an old Streamset the pipe will retrieve the same data on the rerun as it retrieved when the Streamset was first built.
    Similarly, if you are running a Transactional Stream then it is possible that, while your analysis run is taking place, other analysis runs which started after yours may complete before yours, thereby generating additional Streamsets on the input Stream with a future date relative to the date of the Streamset you are generating.
    For Transactional input Streams it is possible to tell the pipe not to ignore these future Streamsets by ticking the Read Future Data tickbox on the Advanced tab.
  • Custom: If this option is selected then you need to specify which input Streamsets to read by configuring combinations of the following fields on the Advanced tab:
    • Read Future Data
    • Only collect from the same run
    • Max Stream Sets
    • Historied
    • From Date Offset
    • To Date Offset

...

Normally when a Pipe requests data from a non-static input Stream then that Stream will first attempt to bring itself up to date, generating new Streamsets as necessary, before supplying the data requested. However, if this field is ticked, the input Stream will not attempt to do this.

...

If this flag is not ticked then it is an indication to PhixFlow that the Stream is not ready to be used during any analysis runs and should therefore be ignored.

The following fields are configured on the Advanced tab:

...

Mandatory

...

If ticked, when multiple Streams are being merged then there must be an input record from this Pipe for an output record to be generated by the output Stream.

If this is a push pipe with positive offsets and this flag is ticked then the notification to create another stream set will only be pushed along the pipe if the last stream set created contains at least one record.

...

The Execution Strategy determines how this pipe should be implemented.

Where this pipe is a push or pull pipe into a Merge Stream, the Default Execution Strategy is to select all stream items from the input Stream sorted by the Group By attributes, then to read items from all input pipes simultaneously, constructing candidate sets from items with matching key values.

Where the Directed Execution Strategy is applied to a pipe (the pipe must not be Mandatory), the other pipes with the Default Strategy operate as above, each being sorted then merged to generate a sequence of candidate sets; the Directed pipe then runs worker tasks to select the additional items by matching key value. These selects are batched up so that each worker reads items for many key values in a single select (see Worker Size), and many workers are run in parallel (see Max Workers).

In general, the Directed strategy should only be used where

  • the number of items in the source Stream is so large that the sorting phase of the Default strategy takes too long, or
  • only a small subset of the items are needed (the majority being discarded because they have key values that don't match the key values read on one of the other mandatory pipes).

Changing the Execution Strategy will make the Merge faster or slower depending on the input data and the details of the input and output Streams, but will not change the business logic of the Merge (i.e. which input items are grouped into candidate sets).
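The Default strategy described above can be illustrated with a small sketch (plain Python, not PhixFlow syntax; the stream contents and the `acct` key are invented for illustration):

```python
from itertools import groupby

def default_merge(inputs, key):
    """Sketch of the Default strategy: sort each input by the group-by
    key (the "sorting phase"), then collect items from all inputs into
    one candidate set per distinct key value."""
    sorted_inputs = [sorted(items, key=key) for items in inputs]
    candidates = {}
    for stream in sorted_inputs:
        for k, group in groupby(stream, key=key):
            candidates.setdefault(k, []).append(list(group))
    return candidates

# Two input streams keyed on account number.
a = [{"acct": 1, "v": "a1"}, {"acct": 2, "v": "a2"}]
b = [{"acct": 2, "v": "b1"}]
sets = default_merge([a, b], key=lambda r: r["acct"])
# acct 2 gets items from both inputs; acct 1 only from the first.
```

A real Merge also applies the Mandatory flag when deciding whether a candidate set produces an output record; that is omitted here for brevity.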

If the input to the pipe is a Collector, the list of key values is made available as _keyList. This will typically be used with an in clause in the Collector query. For example:

select * from customer where account_num in ({_keyList}) order by account_num

...

The maximum number of concurrent worker tasks.

If blank, this defaults to 1.

...

The number of key values to read for a single worker task (which runs a single select statement).

If blank, this defaults to 1000. This is the maximum value that can be used when reading from an Oracle database.
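The batching behaviour can be sketched as follows (plain Python; the `customer`/`account_num` names mirror the `_keyList` example above and are illustrative only):

```python
def batch_keys(keys, worker_size=1000):
    """Split the full key list into chunks; each chunk becomes one
    worker task running a single select with an IN clause."""
    return [keys[i:i + worker_size] for i in range(0, len(keys), worker_size)]

def build_select(batch):
    # Mirrors the documented _keyList usage; table and column names invented.
    placeholders = ", ".join(str(k) for k in batch)
    return f"select * from customer where account_num in ({placeholders})"

batches = batch_keys(list(range(2500)), worker_size=1000)
# 2500 keys with a worker size of 1000 -> 3 worker tasks
query = build_select(batches[-1])
```

This is why 1000 is the ceiling for Oracle sources: Oracle limits the number of expressions in an IN list to 1000.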

...

When doing a lookup, there are two common scenarios:

  1. The pipe does a single lookup onto a stream or database table to get a large number of records in one go (e.g. 1000 records)
  2. The pipe does many lookups, getting a small number of records for each lookup (e.g. 10 records at a time).

Try to estimate the largest number of records that the lookup pipe reads on a single read from a stream or database collector.

The cache is used when carrying out lookups from streams or database collectors. During a lookup, PhixFlow will retrieve records in sets that match the filter on the pipe, e.g.

Code Block
WHERE AddressLine1 = _out.Address

For efficiency, the records are cached (stored temporarily in memory) so that if the same set of records need to be looked up again they are readily available without going back to the database.

There is a limit to how many records can be stored in the cache. The Cache Size field allows you to specify this limit. If no value is set, it will default to the system-wide default, specified in the maximum pipe cache size in the System Tuning tab of the System Configuration dialog.

Panel
bgColor#e6f0ff
titleBGColor#99c2ff
titleTechnical Breakout

If a single read brings back a number of records exceeding 90% of the specified cache size, a warning message will be logged to the console.

If a single read brings back 100% or more of the cache size, and the enforce cache size limit flag is ticked in the system configuration, then the analysis run will stop completely.


Code Block
titleError: Cache Size Limit Exceeded
The Pipe "stream_name.lookup_pipe_name" cache is 100% full (the cache size is 10).

A model administrator can use this information to make an informed decision about the design of the model.

...

The offset applied to the start of the collection period, relative to the period in the output stream that requires populating.

The units are the period of the output stream, that is, if the output stream has a daily period, then setting from date offset = -1.0 means that the start of the collection period will be 1 day earlier than the start of the period in the output stream that is being calculated.

If this is a push pipe then a positive offset can be input. This will tell the stream to run again and generate another stream set.

...

The offset applied to the end of the collection period, relative to the period in the output stream that requires populating.

The units are the period of the output stream, that is, if the output stream has a daily period, then setting to date offset = -1.0 means that the end of the collection period will be 1 day earlier than the end of the period in the output stream that is being calculated.

If this is a push pipe then a positive offset can be input. This will tell the stream to run again and generate another stream set.
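As a worked example of the offset arithmetic above (plain Python; a simplified sketch assuming a daily-period output stream, not PhixFlow's actual implementation):

```python
from datetime import date, timedelta

def collection_window(period_start, period_end, from_offset, to_offset,
                      period_days=1):
    """Offsets are expressed in units of the output stream's period
    (here daily) and shift the start and end of the collection window."""
    start = period_start + timedelta(days=from_offset * period_days)
    end = period_end + timedelta(days=to_offset * period_days)
    return start, end

# Output period: 10 June. From Date Offset = -1.0, To Date Offset = 0
# collects from 9 June up to the normal end of the period.
start, end = collection_window(date(2024, 6, 10), date(2024, 6, 11), -1.0, 0)
```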

...

  • Only collect from the same run
  • Max Stream Sets (this may also be set to zero)
  • Historied

 

...

During Look Ups

...

During File Export

...

During Drill Down

...

The following fields are configured through separate tabs on the form:

...

Form Icons

The form provides the standard form icons.

The form also provides the following icons on the Filter tab:

Image Removed

Adds a clause to the filter.

Image Removed

Deletes the selected clause or condition from the filter.

Image Removed

Adds a condition to a clause of the filter.

The form also provides the following icons on both the Sort/Group and Aggregate Attributes tabs:

Image Removed

Shows the list of attributes that can be added as sort/group or aggregate attributes.

Image Removed

Deletes the selected object from the list.

Image Removed

Adds an object to the list.

See Also

...

Insert excerpt
_Banners
_Banners
nameanalysis
nopaneltrue


This page is for data modellers who need to edit the properties of a pipe.

Overview

A pipe is a connector that links two objects in a PhixFlow model and sends data from the input object to the output object. Pipes allow you to control which attributes and which records from the input are delivered to the output. With the default configuration, the pipe passes all attributes and records from the current run.

The pipe must be enabled to make it active.

For advanced configuration, see Advanced Pipe Configuration.

Tip

A pipe joining a datasource to a data collector has no details to edit. All the configuration for the output data set occurs in the collector:

Insert excerpt
_property_toolbar
_property_toolbar
nopaneltrue

Insert excerpt
_property_tabs
_property_tabs
namebasic-h
nopaneltrue

Insert excerpt
_parent
_parent
nopaneltrue

Basic Settings

FieldDescription
Name

Enter a name. The name is used to refer to the pipe in other model elements. Pipe names default to in.

The name:

  • must start with a letter
  • must not be an Attribute Function name.
  • must not include special characters, except underscore _
Enabled

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 to prevent the pipe from being used during an analysis run.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 to indicate the pipe properties are complete and the pipe is ready to be used. 

Static


Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 when the pipe requests data from a non-static input table, that table will first attempt to bring itself up to date, generating new recordsets as necessary, before supplying the data requested.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
  to prevent the input table from updating itself. The pipe will pull the existing data from the input table.

Pipes from collectors cannot be marked as static.

In the model, hover over a pipe to display an icon that shows 

Mandatory

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
  

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
to indicate that, when multiple tables are being merged, there must be an input record from this pipe for an output record to be generated by the output table.

If this is a push pipe with positive offsets and this option is ticked then the notification to create another recordset will only be pushed along the pipe if the last recordset created contains at least one record. This causes the pipe to present each candidate set to the output table in a different way than usual.

Multiplier

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 is the default.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 so that, for each output record generated by a table, the table will get a set of records from each of its input pipes. If the multiplier flag is ticked on one of these, then the table will generate an output record for each record from the set of records provided by the multiplier pipe. For each output record, each of the other input pipes will provide the same set of records as normal.
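A minimal sketch of the multiplier behaviour described above (plain Python; the record values are invented):

```python
def merge_with_multiplier(multiplier_set, other_sets):
    """One output record per record from the multiplier pipe; every
    output record sees the same sets from the other input pipes."""
    return [{"driver": m, "others": other_sets} for m in multiplier_set]

outputs = merge_with_multiplier(["m1", "m2", "m3"], {"ref": ["r1", "r2"]})
# 3 output records, each paired with the same reference set
```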

Type

Select:

  • Pull: pull pipes are the most common type in PhixFlow - they "pull" data from the input to the output. Pull pipes are shown as solid arrows on models.
  • Look-up: look-up pipes are used to enrich data. Typically, you will have one or more pull pipes to supply the base data for an output and, if needed, one or more look-up pipes to enrich the base data with values from additional inputs. Look-up pipes are shown as dashed lines on models.
  • Push: data is "pushed" rather than "pulled" into the output table. Push pipes are most commonly used when sending data from tables to exporters (File Exporters, Database Exporters, HTTP Exporters). Push pipes are shown as dotted lines on models.
Data to Read

Select the type of input data to use.

  • Latest: supply data from the current run (the latest recordset). This is the most commonly used option.
  • Previous: supply data from the previous run (the previous recordset). This is used when you are comparing data for the current run with data from the previous run, for example, today's data with yesterday's.
  • All: supply data from all runs (all recordsets). 
  • All Previous: supply data from all runs except the current run (all recordsets except the latest recordset).
  • Same Run: this option should only be used where the input and output tables are set to Period: Transactional. The pipe will only collect data from inputs in the same analysis run. This configuration supports several analysis runs going on at the same time without interfering with each other. 
  • Custom: select this option to display additional settings, described in the Custom Data to Read section, below. We recommend that you only use the custom settings when directed to by PhixFlow consultants or support.
Read Future Data

Use this option to exclude or include input recordsets that have future dates relative to the recordset you are generating. For details about how future recordsets occur, see Managing Future Recordsets, below.

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 to exclude future recordsets from this analysis run. This is the default.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 to include future recordsets in this analysis run. For example, for a table with Period: Transactional, you will want to include new recordsets that are being added to the input table after your analysis run starts.

Input

Enter the name of the object at the start of the pipe. 

Insert excerpt
_properties_show_other
_properties_show_other
nopaneltrue

Output

The name of the table at the end of the pipe.

Anchor
future
future
Managing Future Recordsets

In some circumstances the input table may have recordsets that have dates in the future relative to the recordset being generated for the output table. This may happen, for example, if:

  1. you roll-back some recordsets on the output table
  2. but do not roll-back the corresponding recordsets on the input table
  3. and then request that the output table is brought up to date.

Some of the recordsets on the input table will have dates in the future relative to some of the recordsets you are rebuilding.

By default, the Read Future Data check box is not ticked. This means pipes ignore any recordsets with dates in the future relative to the recordset you are generating. You want to ignore future recordsets when you rebuild an old recordset, because you want the pipe to retrieve the same data on the rerun as it retrieved when the recordset was first built.

When you run analysis on a table with a transactional period, it is possible that while your analysis is still running, a different run can start and complete. This run can generate additional recordsets on the input table with a future date relative to the date of the recordset you are generating. For transactional input tables, you want the pipe to use these future recordsets. To do this, tick the Read Future Data check box.

Filter

Filters are made up of a set of clauses; each clause in turn contains a number of conditions. These conditions must be satisfied for data to be passed through the pipe.

Tip

When the pipe Type is Lookup, the filter controls which data will be cached in memory.


Field
Description

Cache Size

Anchor
pipeCacheSize
pipeCacheSize


Available when Type is Lookup. The default value is set in System Configuration.

For lookup pipes, PhixFlow uses the pipe cache when it looks-up data from tables or database collectors. For efficiency, the records are cached (stored temporarily in memory) so that if the same set of records need to be looked up again they are readily available without going back to the database.

Enter a number to set a limit on the data cache size available for the pipe. You need to estimate the largest number of records that the lookup pipe will return on a single read. Check whether PhixFlow is looking up:

  • many records
    The pipe does a single lookup onto a table or external database table to get a large number of records in one go, for example 10,000 records.
  • few records many times
    The pipe does many lookups, getting a small number of records for each lookup, for example, 10 records at a time. In this case, PhixFlow is usually using a key value, such as an account number, to get the data. The key value is:
    • for a table - the attribute used to filter the pipe
    • for a database collector - a condition in the database query. For example:
Code Block
WHERE AccountNumber = _out.AccountNum

If you do not set a limit for the cache, PhixFlow uses the system default set in System Configuration →  System Tuning → Maximum Pipe Cache Size.

Expand
titleWarnings and Errors

In the log for an analysis run, which is available in the system console, PhixFlow reports warnings when a single read returns:

  • over 90% of the specified cache size
  • 100% or more of the cache size.

PhixFlow reports an error and stops the analysis run when:

  • a single read returns 100% or more of the cache size
  • and the System Configuration → Enforce Cache Size limit flag is ticked.

Code Block
titleError Message: Cache Size Limit Exceeded
The Pipe "table_name.lookup_pipe_name" cache is 100% full (the cache size is 10).



Expand
titleMore cache details

Every time the lookup pipe is referenced, PhixFlow calculates the values of all of the variable elements of the query or pipe filter, and checks if it already has a set of data in the cache retrieved using this set of variable values. If so, the data is immediately returned from the cache. Otherwise, a new set of data is read from the table or collector. If adding the new records to the cache would cause it to exceed the maximum cache size, previously cached results are removed to make enough room for the new results.
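The caching behaviour described above can be sketched as a small size-bounded cache (plain Python; PhixFlow's actual implementation may differ):

```python
from collections import OrderedDict

class PipeCache:
    """Sketch: results are keyed by the evaluated filter-variable values;
    least-recently-used entries are evicted to keep the total number of
    cached records under the limit."""
    def __init__(self, max_records, fetch):
        self.max_records = max_records
        self.fetch = fetch            # callable: key values -> records
        self.entries = OrderedDict()  # key values -> list of records

    def lookup(self, key_values):
        if key_values in self.entries:
            self.entries.move_to_end(key_values)  # reuse the cached set
            return self.entries[key_values]
        records = self.fetch(key_values)
        # Evict least-recently-used sets until the new one fits.
        while self.entries and self._size() + len(records) > self.max_records:
            self.entries.popitem(last=False)
        self.entries[key_values] = records
        return records

    def _size(self):
        return sum(len(r) for r in self.entries.values())

# A cache limited to 5 records, where each key fetches 3 records:
cache = PipeCache(5, lambda k: [k] * 3)
cache.lookup("a")
cache.lookup("b")  # 3 + 3 > 5, so the set for "a" is evicted
```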


Pipe View

Use this option to look up data from attributes that are present in a view on the input table.

Select a view from the list. If the input table has no views, the list will be empty.

Note

Sorting or filtering of records must be set directly on the pipe. It is not inherited from the pipe view.

Use the pipe view to limit the attributes that the pipe reads when a table has lots of attributes containing many data records but you only need data from a few attributes. Only the data for the attributes in the view are sent to the output table.

Pipe views are very useful:

  • during lookups, because the pipe loads and caches all of the attributes from the table, which can use a lot of memory, especially when there are many records
  • during file export, because otherwise all data records from all attributes are exported.

To set up a pipe view:

  1. Create a new view on the source table. In the view, only add the attributes you need.
  2. In the pipe Pipe View option, select the pipe view.
  3. Run analysis. PhixFlow only looks-up or exports data from the attributes specified on the view.
Expand
titleUsing pipe views with file exporters

If a File Exporter is configured to export to Excel or to HTML:

  • and it has an Excel template, PhixFlow uses this template and does not use any template specified in the pipe view.
  • but it does not have an Excel template, PhixFlow will use the Excel template specified in the pipe view (if there is one)
  • and the pipe view is a chart view, PhixFlow will export a PNG picture of the chart.


Include History Records

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 to automatically filter out superseded records.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 to include superseded records.

Condition

Select one of the options

  • Where ALL...
  • Where ANY...

To add more conditions, hover your mouse pointer over this field to display the  button and add another condition to your filter. 

Clause

Select an option from the list. PhixFlow adds more fields where you can:

  • select how the filter matches (for example, equals, contains, is null)
  • enter a string that the filter uses to match the data. The string can be an expression or a literal string.
   Filter Icons

Hover your mouse pointer over conditions or clauses to display:

  • Insert excerpt
    _new
    _new
    nopaneltrue
      another condition or clause to the filter
  • Insert excerpt
    _delete
    _delete
    nopaneltrue
     the selected clause or condition from the filter. 

  • Insert excerpt
    _filter_literal
    _filter_literal
    nopaneltrue
     indicates the value entered is a literal value. Click this icon to treat the value as an expression.
  • Insert excerpt
    _filter_expression
    _filter_expression
    nopaneltrue
     indicates the value entered is an expression. Click this icon to treat the value as a literal string.
    Note: ["123", "234", "345"] looks like a literal value but it can be evaluated as an expression.

  • Image AddedOpen the expression in a larger editor.

Cache Extraction Filter

A cache extraction filter allows you to further filter the data retrieved by a pipe. These are not commonly used, but are sometimes helpful when either:

  • Optimising performance on a lookup pipe when for a set of records, the record you require from the lookup depends on non-key data, e.g. the date
  • When getting data from a pull pipe when the filter requires that you compare one value in each record with another; this is not possible within a standard filter.

For case 1, when using a lookup pipe, data retrieved is stored in a cache. See cache size for details. The cache extraction filter allows you, as you are processing a set of output records, to use different cached entries from the lookup for each of the records you are processing. This is very fast compared to looking up from the source (i.e. going back to an external DB table or even another PhixFlow table) for each output record.

E.g. you want to look up the credit rating for a customer for a set of transactions - in the output, each transaction is represented by a single output record.  You create an indexed lookup pipe using CustNo as the key for the index. This means that for each new CustNo you encounter in the data, all the credit rating entries for that CustNo are retrieved by the pipe and placed into the cache. The credit rating for each customer is fully historied, so you get a number of entries for each CustNo. To get the relevant lookup entry for each output record (each transaction), you need to compare the transaction date of the output record to the dates of the credit rating entries in the cache. So to extract the relevant record, you include a cache extraction filter in the form:

Code Block
StartDate <= _out.TransDate && (EndDate >= _out.TransDate || EndDate == _NULL)

Cache extraction filters are entered free hand.

The attribute names referenced must exist in a table. This means that each attribute must be one of:

  • an attribute in a source table, if you are reading from a table
  • if you are reading from an external database table, one of the fields returned by the database collector AND an attribute in the output table. This means to use an attribute with the source as a database collector, there must be an attribute of matching name in the output table
  • an attribute in the destination table, in which case you will refer to it using the format _out.AttributeName
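The credit-rating example above can be rendered in plain Python (dates simplified to integers; note the comparison direction: a rating entry is relevant when its validity period contains the transaction date):

```python
def extract(cached_entries, out_record):
    """Apply a cache extraction filter to the entries cached for one
    CustNo: keep ratings whose period contains the transaction date."""
    return [e for e in cached_entries
            if e["StartDate"] <= out_record["TransDate"]
            and (e["EndDate"] is None or e["EndDate"] >= out_record["TransDate"])]

# Fully historied ratings for one customer (EndDate None = still current).
ratings = [
    {"StartDate": 1, "EndDate": 5, "Rating": "B"},
    {"StartDate": 6, "EndDate": None, "Rating": "A"},
]
match = extract(ratings, {"TransDate": 7})
```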

Filter Examples

Filter on Current User

Sometimes when running analysis you want to select, from the source, only records belonging to the currently logged in user. To set a filter where, say, an attribute in the source, Owner, equals the currently logged in user, add a condition to the filter like this:

Owner Equals _user.name fx

Enter a list of values for an "Is In" or "Is Not In" filter

If you want to filter based on a list of values, use the Is in or Is not in comparators, then type the list of values into the comparison field as a comma separated list like this:

Country Is in England, France, Germany ABC

In this case you must NOT click the ABC icon to convert the value to an fx, because this will indicate that the value is a formula; it must be left as a literal value. If you do click the ABC icon, then the value must be entered like this:

Country Is in ["England","France","Germany"] fx

Sort/Group or Order/Index

For lookup pipes this section is called Order/Index.

Use this section to group and sort data as it comes through the pipe. This section has:

  • a toolbar with standard buttons
  • a grid that lists the attributes that you want to sort or use to group; see Using the Sort/Group Grid, below
  • the following below the grid, when Basic Settings → Type is Look-up:
FieldDescription
Maximum Number of records per Group

Available when Type is Look-up, except where the pipe is connected to a calculate table.

Enter an upper limit for grouped records.

When collating the input records into groups, PhixFlow uses the specified sort order. When it has added the maximum number of records, any more records for the group are ignored.

This can be useful if you want the most recent record for an attribute that has many records. 

  1. Tick the Group check box for the attribute you want to use for grouping.
  2. On an appropriate date attribute, apply a (Z-A) sort order.
  3. Set the Maximum Number of Records to 1. 
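The three steps above can be sketched in plain Python (the `acct`/`updated` attribute names are invented for illustration):

```python
def latest_per_group(records, group_key, date_key, max_records=1):
    """Group by key, sort each group descending (Z-A) on the date
    attribute, then keep at most max_records per group."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    return {
        k: sorted(v, key=lambda r: r[date_key], reverse=True)[:max_records]
        for k, v in groups.items()
    }

rows = [
    {"acct": 1, "updated": "2024-01-01", "status": "old"},
    {"acct": 1, "updated": "2024-03-01", "status": "new"},
]
latest = latest_per_group(rows, "acct", "updated")
# only the most recent record for acct 1 survives
```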
Index Type

Available when Type is Look-up.

Look-up pipes can be configured for fast "indexed" access to cached data collected from external tables, files or from other tables. Indexed access is controlled by configuring the pipe with an index and setting index expressions on grouping attributes. When the Type field on the pipe is set to Look-up, the Index Type field becomes available. It can be set to "None", meaning that there are no index keys, or to "Exact Match", "Best Match" or "Near Match", as described below:

  • Exact Match: The pipe retrieves data from its cache based on an exact match look-up with the values provided after evaluating the index expressions on the "Group By" attributes.

  • Best Match: The pipe retrieves data from its cache based on a "Best Match" look-up after evaluating the index expressions on the "Group By" attributes.
    Note: The last Group By Attribute with a key expression is used for the best match lookup. The index keys on any Group By attributes with a lower sequence number are used as an initial "Exact Match" to find the set of data on which to do the "Best Match". The "Best Match" is defined as the longest key value which matches the evaluated index expression.

  • Near Match: The pipe will retrieve data from its cache based on a "Near Match" look-up after evaluating the index expressions on the "Group By" attributes.
    Note: the last Group By Attribute with a key expression is used for the near match lookup. The index keys on any Group By attributes with a lower sequence number are used as an initial "Exact Match" to find the set of data on which to do the "Best Match". When "Near Match" is selected, an additional field appears where you can enter an expression which should evaluate to a number representing the allowed number of edits (e.g. deletions, insertions, substitutions and transpositions) which can be made when comparing the result of the index expression to the index key in order to achieve a match. For example if the index key is "Smyhte" and the result of the index expression is "Smith" this would still be a match providing that the allowed number of edits is 3 or more (i.e. substitute the 'i' for a 'y', transpose the 't' and the 'h' and then insert an 'e' at the end).

Using the Sort/Group Grid 
Anchor
using
using

To add an attribute to the list:

  • click 
    Insert excerpt
    _attributes_show
    _attributes_show
    nopaneltrue
     to open the list of attributes in the input table
  • drag an attribute into the grid.

To remove an attribute, click 

Insert excerpt
_delete
_delete
nopaneltrue
 in the toolbar.

To set the sort or group properties for an attribute, double-click its name in the grid. If you want to create a new attribute that is not present in the input table, in the section toolbar, click

Insert excerpt
_new
_new
nopaneltrue
. PhixFlow opens the attribute's sort properties:

FieldDescription
Attribute

For input attributes, PhixFlow displays the attribute name. (Read-only)

For a new attribute, enter a name.

Order

Enter the number for the order the attribute appears in the grid and the order in which it is processed. Other attributes are renumbered.
Direction

Select the sort order

  • (A-Z) to sort data records in ascending order, e.g. A to Z, 1 to 9, earliest to latest date.
  • (Z-A) to sort data records in descending alpha-numeric order, e.g. Z to A, 9 to 1, latest to earliest date.
Group

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 by default, data is not grouped.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 to group data records by the value in this attribute.

If this attribute is part of the candidate key set, you must tick the Group check box. Otherwise, the attributes will be used only to sort the data in the candidate set.

Index Expression

This field is available for lookup pipes with an Index Type option selected.

Look-up pipes can be configured for fast "indexed" access to cached data. This data is collected from external tables, files or from other tables. Indexed access is controlled through configuring a pipe with an index and setting index expressions on "Group By" attributes here.

Audit Summary

This option is on the

Insert excerpt
_property_tabs
_property_tabs
nameaudit
nopaneltrue
; see Common Properties.



Tip

In some cases, you may have a pipe connected to a database collector, which pulls data from an external database table. In these cases, the fields in the database must have matching attribute names in the output table. You can refer to an output attribute using the format _out.AttributeName

Aggregate Attributes 
Anchor
aggregate
aggregate

Use this section to define the properties of data that you want to combine as it comes through the pipe.

Note

You cannot aggregate data from attributes if the pipe's input is from:

If you need to aggregate data from a database collector, you can use an SQL query. 

This section has:

  • a toolbar with standard buttons
  • a grid that lists the attributes that you want to aggregate.

To add an attribute to the list:

  • click 
    Insert excerpt
    _attributes_show
    _attributes_show
    nopaneltrue
     to open the list of attributes in the input table
  • drag an attribute into the grid.

To remove an attribute, click 

Insert excerpt
_delete
_delete
nopaneltrue
 in the toolbar.

To set the properties for an aggregate attribute, double-click its name in the grid. PhixFlow opens the attribute's aggregate properties:

FieldDescription
Aggregate Function

Select a function to combine records.

  • Average
  • Count
  • Distinct
  • Maximum
  • Minimum
  • Sum

See Aggregate Function for details. Make sure the function matches the data in the attribute. For example, you cannot Sum text.
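The behaviour of each aggregate function over a single group of values can be illustrated with plain Python built-ins (an informal sketch of the semantics, not PhixFlow code):

```python
# Hypothetical numeric values from one group of input records.
data = [3, 1, 4, 1, 5]

aggregates = {
    "Count":    len(data),            # number of records
    "Sum":      sum(data),            # total of the values
    "Minimum":  min(data),            # smallest value
    "Maximum":  max(data),            # largest value
    "Average":  sum(data) / len(data),  # arithmetic mean
    "Distinct": len(set(data)),       # number of distinct values
}
```

As the note above says, the function must suit the attribute's data type: Sum, Average, Minimum and Maximum need ordered or numeric data, whereas Count and Distinct work on any type.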

Attribute

Select the attribute to aggregate from the list of attributes in the input table.

PhixFlow does not use the value in this field if the Aggregate Function is Count.

Name

Enter the name for the aggregated attribute. This can be the same as the original attribute.

Order

The order of the aggregate attribute in the output table.

Advanced

The following options are available when the pipe Type is Push or Pull. Allow Incomplete Stream Sets is also available when the pipe Type is Look-up.

FieldDescription
Data Expected

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 means the pipe may receive no data from its input during an analysis run.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
  means PhixFlow reports an error if the pipe receives no data from the input datasource, collector or table during an analysis run.

Allow Incomplete Streamsets

This field is available when the input is not Transactional. Where the input table is transactional, PhixFlow behaves as though the box is ticked.

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 to complete a recordset before passing the data to the output table. During the analysis run, PhixFlow pulls data into the input table until the recordset is complete. If it cannot complete the recordset, PhixFlow reports an error message.

PhixFlow cannot complete a recordset if:

  • either the input table is set to be  
    Insert excerpt
    _static
    _static
    nopaneltrue
  • or all of the pipes reading from the table are static.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
  PhixFlow ignores incomplete recordsets in a static input table and does not report an error.

Tip

You must tick this check box on all the pipes that read from a static (or effectively static) input table in the analysis run. PhixFlow reports an error if any pipe tries to complete the recordset during the analysis run.

Pipes that are not used in the analysis run do not try to complete a recordset, so will not report an error. (Unused pipes can occur if they lead to tables on branches of the model that are not being run.)

Buffer Size

Enter a number for the buffer size used to perform the table calculation. If a large amount of data is being processed, a larger buffer size will give better performance.

Max Records To Read


Enter a number for the maximum number of records that should be read down this pipe. The pipe may read more than this number of records if it is configured to carry out multiple reads simultaneously. For example:

  • the pipe is connected to a File Collector that reads multiple files simultaneously
  • the pipe strategy is Directed with multiple workers.
Strategy

Select an option to specify how this pipe should be implemented; see the section on Directed Merge Strategy.

Max Workers

This field is available when Strategy is Directed.

Enter the maximum number of concurrent worker tasks. When no value is specified, this defaults to 1.

Worker Size

This field is available when Strategy is Directed.

Enter the number of key values to read for a single worker task, which runs a single select statement.

When no value is specified, this defaults to 1000. This is the maximum value that can be used when reading from an Oracle database.
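The relationship between Worker Size and Max Workers can be sketched as follows (a hypothetical illustration of the batching, not PhixFlow's implementation): the key values to read are split into batches of Worker Size, each batch is fetched by one worker task running a single select statement, and Max Workers caps how many of those tasks run concurrently.

```python
def plan_worker_tasks(key_values, worker_size=1000, max_workers=1):
    # Split the key values into batches of at most worker_size; each batch
    # would be read by a single worker task with one select statement.
    batches = [key_values[i:i + worker_size]
               for i in range(0, len(key_values), worker_size)]
    # Concurrency is capped by max_workers (and by the number of batches).
    concurrency = min(max_workers, len(batches))
    return batches, concurrency

# e.g. 2,500 key values with the default Worker Size of 1000:
batches, concurrency = plan_worker_tasks(list(range(2500)),
                                         worker_size=1000, max_workers=4)
```

With 2,500 keys this yields three batches (1000, 1000 and 500 keys), so at most three workers can usefully run even though four were allowed.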

Log Traffic

When system logging → Pipe Logging is ticked, PhixFlow always logs the number of records returned by this pipe, whatever is set here; see System Logging Configuration.

Insert excerpt
_log_traffic1
_log_traffic1
nopaneltrue

Custom Data to Read 
Anchor
custom
custom

The following properties are available in Basic Settings when you set Data To Read to Custom.

FieldDescription
Only collect from same run

Every time the analysis engine runs, all of the recordsets that are created by all of the tables affected by that analysis run are given the same Run ID.

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
 so the pipe can collect recordsets with different Run IDs.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 so that the pipe will only collect recordsets from the input table that have the same Run ID as the recordset currently being created by the output table. You should only tick this check box if both the input and output tables have Period set to Transactional.
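The effect of ticking this option can be sketched as a simple filter on Run ID (an illustrative model, not PhixFlow code):

```python
# Run ID of the recordset the output table is currently creating.
output_run_id = 42

# Hypothetical recordsets on the input table, each tagged with the Run ID
# of the analysis run that created it.
input_recordsets = [
    {"run_id": 41, "rows": 10},
    {"run_id": 42, "rows": 7},
    {"run_id": 42, "rows": 3},
]

# With "Only collect from same run" ticked, only recordsets that share the
# output table's Run ID are collected; the run-41 recordset is skipped.
collected = [rs for rs in input_recordsets
             if rs["run_id"] == output_run_id]
```

With the box unticked, all three recordsets would be read regardless of Run ID.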

From Offset

Enter the offset applied to the start of the collection period, relative to the period in the output table that requires populating.

To Offset

Enter the offset applied to the end of the collection period, relative to the period in the output table that requires populating.
Max Stream Sets

Enter the number of recordsets to be retrieved from the input table. 

For a push pipe with positive offsets, enter the maximum number of recordsets that can be created, i.e. the maximum number of cycles this pipe can initiate.

Historied

Insert excerpt
_check_box_untick
_check_box_untick
nopaneltrue
  so that all data will be collected from the input table, regardless of period. In this case, any From Offset or To Offset values determine whether the required data periods in the input table exist before the table calculation can be carried out.

Insert excerpt
_check_box_tick
_check_box_tick
nopaneltrue
 so that the pipe will collect data from the input table by period.

For example, if:

  • the from and to offsets are both 0.0
  • and the output table requires table generation for the period 17/10/07 - 18/10/07

the pipe reads data from the input table for the period 17/10/07 - 18/10/07. 
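The offset arithmetic from the example above can be sketched as follows. This is an illustrative model only: it assumes offsets are expressed in whole days relative to the output period, and the function name is hypothetical:

```python
from datetime import date, timedelta

def collection_period(period_start, period_end, from_offset=0, to_offset=0):
    # Shift the output table's period by the configured offsets (in days,
    # an assumption for this sketch) to get the input collection period.
    return (period_start + timedelta(days=from_offset),
            period_end + timedelta(days=to_offset))

# Output period 17/10/07 - 18/10/07 with both offsets at 0:
start, end = collection_period(date(2007, 10, 17), date(2007, 10, 18))

# A From Offset of -1 would extend collection back one day.
prev_start, _ = collection_period(date(2007, 10, 17), date(2007, 10, 18),
                                  from_offset=-1)
```

With zero offsets the pipe reads exactly the output period, matching the example; a negative From Offset widens the window into earlier input data.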

Insert excerpt
_description
_description
nopaneltrue

Insert excerpt
_audit
_audit
nopaneltrue

Live Search
spaceKey@self
additionalnone
placeholderSearch all help pages
typepage

Panel
borderColor#00374F
titleColorwhite
titleBGColor#00374F
borderStylesolid
titleSections on this page

Table of Contents
maxLevel3
indent12px
stylenone


Learn More

For links to all pages in this topic, see Analysis Models for Batch Processing Data.

Insert excerpt
_terms_changing
_terms_changing
nopaneltrue