
Panel
bgColor#e6f0ff
titleBGColor#99c2ff

The name must start with a letter, can contain no special characters except the underscore character '_', and cannot be an Attribute Function name.

...

Panel
bgColor#e6f0ff
titleBGColor#99c2ff

If the Only collect from same run flag is ticked, the pipe will only collect data generated by the same analysis run that is generating the output data. This is only used when building a transactional model.

...

Filters, sorting and grouping, and aggregating are configured through their own tabs on the form:

...

Anchor
advancedPipeConfiguration
advancedPipeConfiguration
Advanced Pipe Configuration

Pipe Form Reference

Details tab

The following fields are configured on the Details tab:

Name

The name of the pipe.

Input

The name of the input stream providing data to the pipe. This field is not editable.

Output

The name of the output stream, that is, the stream at the end (the arrow end) of the pipe.

Type

Determines whether the pipe is a pull, push or lookup pipe.
Data To Read

This field is used to determine which Streamsets to read from the input Stream.

Static

Normally when a Pipe requests data from a non-static input Stream then that Stream will first attempt to bring itself up to date, generating new Streamsets as necessary, before supplying the data requests. However, if this field is ticked, the input Stream will not attempt to do this.

Enabled

If this flag is not ticked, this indicates to PhixFlow that the Stream is not ready to be used during any analysis runs and should therefore be ignored.

The following fields are available on the Details tab if you set Data To Read = Custom:

Only collect from same run

Every time the analysis engine runs, all of the stream sets created by all of the streams affected by that analysis run are given the same Run ID. If this flag is ticked, the pipe will only collect stream sets from the input stream that have the same Run ID as the stream set currently being created by the output stream. You should only use this flag if both the input and output streams are transactional.

Max Stream Sets

In almost all cases this specifies the number of stream sets to be retrieved from the input stream. However, if this is a push pipe with positive offsets, this value indicates the maximum number of stream sets that can be created, i.e. the maximum number of cycles this pipe can initiate.

Historied

If ticked, the pipe will collect data from the input stream by period. So if the from and to date offsets are both 0.0, and the output stream requires stream generation for the period 17/10/07 - 18/10/07, data will be collected from the input stream for the period 17/10/07 - 18/10/07. If not ticked, all data will be collected from the input stream, regardless of period. In this case, the offsets are still used to determine whether the required data periods in the input stream exist before the stream calculation can be carried out.

From Date Offset

The offset applied to the start of the collection period, relative to the period in the output stream that requires populating.

To Date Offset

The offset applied to the end of the collection period, relative to the period in the output stream that requires populating.

Read Future Data

If you are running a transactional stream, it is possible that other analysis runs which started after yours complete before yours, generating additional stream sets on the input stream. These additional stream sets will then have a future date relative to the date of the stream set you are generating. By default, PhixFlow ignores input stream sets that have a date in the future relative to the stream set being generated.

However, for transactional streams you can tell the pipe to include future stream sets by ticking this box.

This field is only available if the input stream is transactional and:

  • Only collect from same run is not ticked
  • Max Stream Sets is blank or zero
  • Historied is not ticked
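The Historied behaviour above amounts to shifting the output stream's period by the two offsets to get the input collection period. A minimal sketch in Python (the function name is illustrative, not part of PhixFlow; offsets are in days, and fractional values such as 0.5 are allowed):

```python
from datetime import date, timedelta

def collection_period(output_start, output_end, from_offset, to_offset):
    """Apply the From/To Date Offsets (in days) to the output stream's
    period to obtain the period to collect from the input stream."""
    return (output_start + timedelta(days=from_offset),
            output_end + timedelta(days=to_offset))

# With both offsets 0.0, the input period matches the output period:
start, end = collection_period(date(2007, 10, 17), date(2007, 10, 18), 0.0, 0.0)
# start = 17/10/07, end = 18/10/07
```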

Advanced tab

The following fields are configured on the Advanced tab:

Mandatory

If ticked, when multiple Streams are being merged then there must be an input record from this Pipe for an output record to be generated by the output Stream.

If this is a push pipe with positive offsets and this flag is ticked, the notification to create another stream set will only be pushed along the pipe if the last stream set created contains at least one record.

Multiplier

When processing data, a Stream first constructs CandidateSets ready for processing from its (non-Multiplier) input Pipes. For each Multiplier Pipe, the Stream then multiplies each CandidateSet by creating a copy of the original CandidateSet for each row returned from the Multiplier Pipe. Note that each resulting CandidateSet contains the original CandidateSet plus one row from the Multiplier Pipe.
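The multiplication described above is effectively a cross product between the existing CandidateSets and the multiplier rows. A minimal sketch, with illustrative names rather than PhixFlow internals:

```python
from itertools import product

def multiply_candidate_sets(candidate_sets, multiplier_rows):
    # Each resulting candidate set is a copy of an original candidate
    # set plus exactly one row from the multiplier pipe.
    return [cs + [row] for cs, row in product(candidate_sets, multiplier_rows)]

base = [["a1"], ["a2"]]            # candidate sets from non-multiplier pipes
rows = [{"rate": 1}, {"rate": 2}]  # rows returned by the multiplier pipe
result = multiply_candidate_sets(base, rows)
# 2 candidate sets x 2 multiplier rows -> 4 candidate sets
```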
Execution Strategy

The Execution Strategy determines how this pipe should be implemented. See the section on Directed Merge Strategy.

Max Workers

The maximum number of concurrent worker tasks.

If blank, this defaults to 1.

Worker Size

The number of key values to read for a single worker task (which runs a single select statement).

If blank, this defaults to 1000. This is the maximum value that can be used when reading from an Oracle database.
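The interaction of these two fields can be pictured as: the key values are split into batches of Worker Size, each batch becoming one worker task issuing a single select, with up to Max Workers tasks running concurrently. A simplified sketch of the batching only (not the concurrency; names are illustrative):

```python
def worker_batches(key_values, worker_size=1000):
    """Split key values into batches; each batch becomes one worker
    task that runs a single select statement."""
    return [key_values[i:i + worker_size]
            for i in range(0, len(key_values), worker_size)]

# 2500 keys with the default worker size of 1000 -> 3 worker tasks
tasks = worker_batches(list(range(2500)))
# batch sizes: 1000, 1000, 500
```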

Cache Size

The cache is used when carrying out lookups from streams or database collectors. When doing a lookup, there are two common scenarios:

  1. The pipe does a single lookup onto a stream or database table to get a large number of records in one go (e.g. 10,000 records).
  2. The pipe does many lookups, getting a small number of records for each lookup (e.g. 10 records at a time).

In case 2, the results returned are typically based on a key value, e.g. an account number. This will be used in the filter of the pipe, if you are reading from a stream, or in the query, if you are reading from a database collector. For example, the query in a database collector will include the condition:

Code Block
WHERE AccountNumber = _out.AccountNum

For efficiency, the records are cached (stored temporarily in memory) so that if the same set of records needs to be looked up again it is readily available without going back to the database.

This field allows you to set a limit on the size of the cache. Setting a limit is important because, if you do not, the cache can become very large and consume a lot of memory, which can lead to a slowdown in both your tasks and those of other users of PhixFlow.

To set the cache size, try to estimate the largest number of records that the lookup pipe will return on a single read.

If you do not set a limit, it defaults to the system-wide default, specified in Maximum Pipe Cache Size on the System Tuning tab of the System Configuration.

Panel
bgColor#e6f0ff
titleBGColor#99c2ff
titleCache warnings and errors

If a single read brings back over 90% of the specified cache size, a warning message will be logged to the console.

If a single read brings back 100% or more of the cache size, a second warning message will be generated. If the Enforce Cache Size Limit flag is ticked in System Configuration, an error will be generated instead of a warning, and the analysis run will stop completely.

Code Block
titleError Message: Cache Size Limit Exceeded
The Pipe "stream_name.lookup_pipe_name" cache is 100% full (the cache size is 10).

Panel
bgColor#e6f0ff
titleBGColor#99c2ff
titleTechnical breakout

Every time the lookup pipe is referenced, PhixFlow calculates the values of all of the variable elements of the query or pipe filter, and checks if it already has a set of data in the cache retrieved using this set of variable values. If so, the data is immediately returned from the cache. Otherwise, a new set of data is read from the stream or collector. If adding the new records to the cache would cause it to exceed the maximum cache size, previously cached results are removed to make enough room for the new results.
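The behaviour in the technical breakout can be modelled as a bounded cache keyed by the evaluated filter/query variable values. The sketch below is a simplified illustration (the class and names are not PhixFlow API, and the document does not specify the exact eviction order, so oldest-first is an assumption):

```python
from collections import OrderedDict

class LookupCache:
    """Bounded cache of lookup results, keyed by the tuple of evaluated
    variable values (e.g. an account number)."""

    def __init__(self, max_records=1000):
        self.max_records = max_records
        self.cache = OrderedDict()   # key values -> list of records
        self.size = 0                # total records currently cached

    def lookup(self, key_values, read_from_source):
        if key_values in self.cache:      # cache hit: no re-read
            return self.cache[key_values]
        records = read_from_source(key_values)
        # Evict previously cached results until the new records fit.
        while self.cache and self.size + len(records) > self.max_records:
            _, evicted = self.cache.popitem(last=False)
            self.size -= len(evicted)
        self.cache[key_values] = records
        self.size += len(records)
        return records
```

A single read that alone exceeds `max_records` corresponds to the 100%-full warning/error case described in the panel above.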
Buffer Size

The buffer size used to perform the stream calculation. If a large amount of data is being processed, then setting a large buffer size will give better performance.
Allow Incomplete Stream Sets

Normally, when a pipe tries to read from an input stream that contains an incomplete stream set, PhixFlow will attempt to complete the stream set before passing data down the pipe. However, if the stream is static (i.e. the stream has its 'static' flag ticked) or is effectively static (i.e. all of the pipes reading from it in this analysis run are static) then, instead of completing the stream set, an error message is produced indicating that you cannot read from this stream because it contains an incomplete stream set.

If you do not want this error message to be produced when reading from static (or effectively static) streams, but would instead prefer PhixFlow to ignore the incomplete stream sets, then you must tick this box on all pipes that will read from the input stream in this analysis run. If there are multiple pipes that read from the input stream during this analysis run and even one of them does not have this box ticked, you will not be allowed to read from the stream and the error message will be produced.

Pipes which are not used in the current analysis run (for example, where they lead to streams on branches of the model which are not run by the current task plan) have no effect on whether or not the error message is produced.
Data Expected

This field is available when the Pipe Type is Push or Pull. This flag allows the user to specify that the pipe is expecting to receive data. If ticked but no data is received, this is treated as an error.
Pipe View

The pipe view is used to limit which fields are retrieved down the pipe and in what order, and in some circumstances how each field is to be formatted. You can select from any of the views that have been configured on the source stream. Please note that any sorting or filtering of records will have to be applied directly on the pipe, and will not be inherited from the pipe view.

The pipe view is used in three contexts.

During Look Ups

Pipe views can be used on lookup pipes to limit the fields that are returned by the lookup request. This is most useful in the scenario where you want to read and cache data on a lookup pipe from a stream that has lots of attributes but where only a small number of attributes are actually required. You can simply create a new view on the source stream listing only the attributes needed, then specify it as the pipe view on the lookup pipe. Only those attributes specified on the view will then be loaded.

Without a pipe view, the pipe will load and cache all of the attributes from the stream, which may consume a significant amount of free memory if there are a large number of records.
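The effect of a pipe view on a lookup can be sketched as projecting each record onto the view's attribute list. A minimal illustration (attribute names and the function are hypothetical):

```python
def apply_pipe_view(records, view_attributes):
    # Keep only the attributes listed on the view, in the view's order.
    return [{attr: rec[attr] for attr in view_attributes} for rec in records]

full = [{"AccountNum": 1, "Name": "A", "Balance": 10.0},
        {"AccountNum": 2, "Name": "B", "Balance": 20.0}]
slim = apply_pipe_view(full, ["AccountNum", "Balance"])
# Only AccountNum and Balance are loaded and cached for each record
```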

During File Export

When sending data to a file exporter, only those fields specified on the pipe view will be exported. If no pipe view is supplied, all fields will be exported.

If the file exporter is configured to export to Excel or to HTML, and no Excel template is specified on the exporter, then the pipe view will be checked to see if an Excel template has been specified there. This template will then be used as part of the export. See the description of file exporters for further details. If an Excel template has been specified on the exporter then this will override any template specified on the pipe view.

If the file exporter is configured to export to HTML and the pipe view is a chart view, the output will be a picture of the chart in PNG format.

During Drill Down

When drilling down from an alarm or stream item, the pipe view is only used to determine which attributes from the source stream should be shown in the drill down display and in what order.

If a view is not specified, then all attributes are shown.

Pipe Exporter

An exporter can be selected from the set of Database Exporters configured for the input stream. This exporter can then be used from the Drill Down View. This feature is useful when PhixFlow is used to 'recommend' a set of updates. By configuring an alarm to be generated when a set of recommendations is made, the user can drill down through the alarm to see the list of recommendations and then hit the exporter icon to apply them.

Any filters applied on the pipe will be applied when the data is pushed to the pipe exporter, so it is possible that not all of the data in the grid will be exported - some records may be rejected by the filter.

Max Records To Read

The maximum number of records that should be read down this pipe. The pipe may read more than this number of records if it is configured to carry out multiple reads simultaneously, e.g. if it is connected to a File Collector which has been configured to read multiple files simultaneously, or if this pipe's strategy is "Directed" with multiple workers.

    ...