...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
The following fields are configured for Streams:
...
...
...
- Transactional: allows multiple users to run independent analysis tasks at the same time.
- Daily: generate or collect data every day.
- Monthly: generate or collect data every month.
- Variable: generate or collect data from the most recent run of the stream up to the current date.
When the period is first set to Transactional, the UID stream attribute will be created if it does not already exist.
...
The type of function used to generate this stream. Possible types are:
...
You can select a "loop" pipe - that is, a pipe linking the stream back into itself - in this field. If you do, new records will be compared to existing records, using the selected loop pipe, and if a repeated record is found, the old one will be marked as 'superseded'.
...
Only applies when period is Transactional. If ticked, updates and deletes initiated by stream actions (not those carried out by analysis runs) will automatically mark the existing record as superseded and create a new stream set. The new versions of the updated records will be placed in the new stream set. Inserts will simply create a new stream set and add the inserted record to that stream set.
When Audit Manual Changes is first set, the attributes UpdateAction, UpdatedByName, UpdatedByID and UpdatedTime will be created if they do not already exist. If you do not require these attributes, delete them.
UpdateAction must be set to the type of action, such as INSERT, UPDATE or DELETE. The other attributes will be populated if they exist on the stream:
- UpdatedByName - the name of the user that performed the update.
- UpdatedByID - the internal ID of the user that performed the update.
- UpdatedTime - the date and time the update was made.
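The versioning behaviour described above can be sketched in Python. This is an illustrative model only, not PhixFlow code: the record structure, the Superseded flag, and the function name are assumptions; the audit attribute names come from the text.

```python
from datetime import datetime

def audited_update(record, changes, user_name, user_id):
    """Illustrative model of Audit Manual Changes: the existing record
    is marked superseded and a new version is created carrying the
    audit attributes (UpdateAction, UpdatedByName, UpdatedByID,
    UpdatedTime)."""
    record["Superseded"] = True                 # old version superseded
    new_version = {**record, **changes}         # copy with the changes applied
    new_version["Superseded"] = False
    new_version["UpdateAction"] = "UPDATE"      # INSERT, UPDATE or DELETE
    new_version["UpdatedByName"] = user_name
    new_version["UpdatedByID"] = user_id
    new_version["UpdatedTime"] = datetime.now().isoformat()
    return new_version

old = {"Account": "A1", "Balance": 100, "Superseded": False}
new = audited_update(old, {"Balance": 120}, "jsmith", 42)
```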
...
A list of the stream attributes in the stream. The toolbar on this section has the options Show the list of File Collectors and Show the list of Streams.
To edit the properties for an attribute, double-click the attribute name. To edit only the expression:
- Right-click the attribute name to display the context menu.
- Select Edit the expression field.
- PhixFlow opens a simple text editor box.
- Make changes to the attribute's expression.
- Click to save your changes.
...
...
The order of the attributes in the stream. This is important because the stream attribute expressions are evaluated in this order. If the result of an attribute expression, or a $ variable calculated during its evaluation, is required in the expression of a second attribute, the second attribute must come after the first in the attribute list.
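Conceptually, attribute evaluation behaves like a sequence of assignments: each attribute may reference only results computed before it. A minimal Python sketch (illustrative only; the attribute names are invented):

```python
def evaluate(record):
    """Attributes are evaluated top-to-bottom, so an attribute may
    reference any attribute that appears earlier in the list."""
    out = {}
    out["Net"] = record["Amount"] - record["Discount"]  # evaluated first
    out["Tax"] = out["Net"] * 0.2                       # may use Net: it comes later
    out["Gross"] = out["Net"] + out["Tax"]              # may use both
    return out

result = evaluate({"Amount": 100, "Discount": 20})
```

Reversing the order (computing Tax before Net) would fail, which is why the attribute list order matters.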
...
- When one or more output pipes from the stream use the field to 'Filter' the stream.
- When one or more output pipes from the stream use the field in a 'Sort/Group' action.
...
If ticked, new filter conditions on this field are case-insensitive by default. The Ignore Case check box in the filter window inherits this setting; see Filters on Data Views. For case-insensitive filters, it makes no difference whether the attribute is also indexed.
...
do ( $aRange = [], addElement($aRange, rng.RangeFrom), addElement($aRange, rng.RangeTo), $bRange = [], addElement($bRange, $aRange), $bRange )
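For readers less familiar with the expression syntax, the same construction in plain Python, with sample values assumed for rng.RangeFrom and rng.RangeTo (the pipe and field names are taken from the expression above):

```python
# Sample values standing in for rng.RangeFrom and rng.RangeTo
range_from, range_to = 1, 10

a_range = []                 # $aRange = []
a_range.append(range_from)   # addElement($aRange, rng.RangeFrom)
a_range.append(range_to)     # addElement($aRange, rng.RangeTo)

b_range = []                 # $bRange = []
b_range.append(a_range)      # addElement($bRange, $aRange)

# The do(...) block evaluates to its last value, $bRange:
result = b_range
```

So the expression yields a list containing one inner list of the two range bounds.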
...
ifNull(in.ASSET,
    [1,10,12],
    // else do
    [5,7]
)
...
do( lookup(lkin, $num = in.BNumber), lkin )
will return a list of records which match the lookup on the lkin pipe. In this case the required data can be extracted from the Output Multiplier using the following expression:
do ( $values = _type, $values.account_num )
If the output multiplier expression evaluates to _NULL, an empty list of values or an empty list of records then a single output record will be produced with _type set to _NULL, _NULL or an empty record respectively.
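The multiplier behaviour described above can be sketched in Python. This is an illustrative model, not PhixFlow internals; for brevity the _NULL and empty-list cases are collapsed into a single branch, each still producing exactly one output record.

```python
def apply_output_multiplier(multiplier_values, extract):
    """Illustrative model: one output record is produced per element of
    the multiplier result, with _type bound to that element. A _NULL or
    empty multiplier result still produces a single output record."""
    NULL = None  # stands in for PhixFlow's _NULL
    if multiplier_values is NULL or multiplier_values == []:
        return [extract(NULL)]
    return [extract(value) for value in multiplier_values]

# e.g. records returned by a lookup, extracting account_num per record
# (as in the `$values = _type, $values.account_num` expression above):
records = [{"account_num": "A1"}, {"account_num": "A2"}]
nums = apply_output_multiplier(
    records, lambda v: v["account_num"] if v else None
)
```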
...
A list of the filters on the stream. See Filters on Data Views.
Any filter defined on the stream may appear in the dropdown list of filters accessible from the header of each stream view. To make a filter available in a view, the filter must be added to the list of filters for that view. See Stream View for details.
All filters defined on this tab will be available on the system generated Default View for this Stream.
...
A list of pipes into the stream.
Note: It is possible for this list to include pipes that have no input. This occurs if the source stream has been deleted, or if a model has been moved to a different PhixFlow instance (export/import), leaving behind a referenced stream. Any pipes with no input are highlighted in yellow. To resolve pipes with no input you can:
...
The number of days of data to keep in the stream.
When an archive task runs for a stream, stream data is deleted if it is at least Keep for X Days old, or if it is older than the Keep for Y Stream Sets most recent valid stream sets.
If both Keep for X Days and Keep for Y Stream Sets are set, stream data will be deleted only if it meets both conditions. If neither is set, stream data is kept indefinitely.
If Save Archive to File is ticked, deleted items are first saved to archive files.
The age of data in a stream set is its 'to' date relative to the 'to' date of the newest valid stream set in the stream.
See here for how to set up and schedule an Archive Task.
Please see the section below on Archiving Examples to see how this value can be used within Archiving strategies.
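The interaction between the two limits can be sketched as a predicate in Python. This is an illustrative model of the rules above, not PhixFlow internals; the parameter names are assumptions.

```python
def should_archive(age_days, sets_older_than, keep_days, keep_sets):
    """Model of the retention rule: a stream set is deleted only when it
    exceeds every configured limit. age_days is the stream set's age;
    sets_older_than is how many newer valid stream sets exist; an unset
    limit is passed as None."""
    if keep_days is None and keep_sets is None:
        return False                                   # neither set: keep forever
    too_old = keep_days is None or age_days >= keep_days
    too_far_back = keep_sets is None or sets_older_than >= keep_sets
    return too_old and too_far_back                    # both set: must meet both
```

For example, with Keep for X Days = 7 and Keep for Y Stream Sets = 3, a 10-day-old stream set with only one newer valid stream set is still kept, because it fails the stream-set condition.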
...
The number of stream sets of data to keep in the stream.
See Keep for X Days for the main description of archiving.
...
The number of days for which to keep superseded data in the stream.
If Track Superseded Data is ticked, then this field will become visible/enabled.
In a stream where the superseded date is tracked, the stream data will contain a mixture of superseded records and "active" records - that is, records that have not been superseded.
When an archive task runs for a stream, records that were marked as superseded more than Keep Superseded for X Days ago, or more than Keep Superseded for Y Stream Sets stream sets ago, are deleted.
If both Keep Superseded for X Days and Keep Superseded for Y Stream Sets are set, superseded records will be deleted only if they meet both conditions. If neither is set, superseded records are not deleted.
This means, for example, that if you have set Keep Superseded for X Days to 4, you will be able to roll back 3 days, making the 4th day the latest valid day.
If Save Archive to File is ticked, deleted items are first saved to archive files.
Please see the section below on Archiving Superseded Examples to see how this value can be used within Archiving strategies.
...
The number of stream sets for which to keep superseded data in the stream.
If Track Superseded Data is ticked, then this field will become visible/enabled.
See Keep Superseded for X Days for the main description of archiving superseded records.
...
If checked, this specifies that all users can view this data by default (provided they have the basic privilege to view streams).
If this field is not checked, then access to the underlying data is controlled by dropping user groups onto the stream's "User Group" tab.
Note that the default setting for this field on streams is controlled by the system parameter allowAccessToDataByDefault.
...
- All: indexes on the Stream are optimised for selecting from all stream sets (non-historied reads).
- Latest: indexes on the Stream are optimised for selecting from the latest stream set (i.e. for historied reads).
- Superseded: indexes on the Stream are optimised for self-updating streams which have mostly superseded records.
- None: no indexes are created on the Stream.
...
- Database: Store the data in a regular table within the PhixFlow database. This is the most common option.
- Database (Partitioned): Store the data in a partitioned table within the PhixFlow database. This option provides improved performance for rollback and archiving of very large stream sets. The option is only available if partitioning is available within your database installation.
- In Memory: Data for the Stream will not be written to the database. This option can be used, for example, when you want to aggregate large amounts of unsorted data which can then be written to a stored Stream.
...
This field only appears if the Period is set to Transactional. If ticked, it ensures that only a single stream set can be generated at a time even if the stream receives several concurrent requests to generate data.
This can be useful where you want to make sure that two analysis runs don't attempt to update the same records at the same time e.g. as a result of two people selecting the same records in a view and then hitting the same action button at the same time to process those records.
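The serialisation this option provides can be modelled with a lock in Python. This is a conceptual sketch of the behaviour, not how PhixFlow implements it; the class and method names are invented.

```python
import threading

class TransactionalStream:
    """Conceptual model of 'one stream set at a time': concurrent
    requests to generate data are serialised on a lock, so two action
    buttons pressed simultaneously cannot build stream sets at once."""
    def __init__(self):
        self._lock = threading.Lock()
        self.stream_sets = []

    def generate(self, records):
        with self._lock:                    # later requests wait here
            self.stream_sets.append(list(records))

stream = TransactionalStream()
stream.generate([{"id": 1}])
stream.generate([{"id": 2}])
```

Each request still produces its own stream set; the lock only prevents them from being generated concurrently.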
...
If this flag is ticked then whenever the analysis engine needs to generate data for this stream it will first wait for all running tasks to complete before it starts.
Any additional analysis tasks submitted while this stream is waiting to start, or while it is generating data, will wait until this stream has completed its analysis before they start.
...
Description
...
The table below assumes the stream to be archived currently contains 8 stream sets. Two from the current day and one from each of the previous 6 days.
In the table below, the value null indicates that no value has been entered in that field.
Note that archiving will always retain the maximum number of active stream sets consistent with the configured limits, so no stream set that either limit requires to be kept will be archived.
...
In the case where only the Keep Superseded for X Days and Keep Superseded for Y Stream Sets fields are populated, the same logic as in the table above applies to the superseded records. Note that, again, archiving will always retain the maximum number of superseded stream sets consistent with the limits, so no conflicting stream sets will be archived.
In cases where a mixture of the full archive fields (Keep for X Days, Keep for Y Stream Sets) and the superseded archive fields (Keep Superseded for X Days, Keep Superseded for Y Stream Sets) are populated, the full archive values are applied first and the resulting stream item records are archived and deleted. Only then do the remaining stream sets use the Keep Superseded ... values to apply a further condition, archiving and deleting any remaining non-qualifying superseded records.
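This two-phase behaviour can be sketched in Python, counting stream sets only. It is an illustrative model under the assumption that the day-based limits behave analogously; the data structures and names are invented.

```python
def archive(stream_sets, keep_sets, keep_superseded_sets):
    """Two-phase model: phase 1 applies the full archive limit to whole
    stream sets; phase 2 then strips superseded records from the kept
    stream sets older than the superseded limit."""
    # Phase 1: full archive - keep only the most recent keep_sets sets.
    kept = stream_sets[-keep_sets:]
    # Phase 2: remove superseded records from kept sets older than the
    # keep_superseded_sets most recent ones.
    cutoff = max(len(kept) - keep_superseded_sets, 0)
    for stream_set in kept[:cutoff]:
        stream_set["records"] = [r for r in stream_set["records"]
                                 if not r.get("superseded")]
    return kept

# 5 stream sets, oldest first, each with one active + one superseded record:
sets = [{"records": [{"superseded": True}, {"superseded": False}]}
        for _ in range(5)]
result = archive(sets, keep_sets=4, keep_superseded_sets=2)
```

Here the oldest stream set is deleted outright, the next two keep only their active records, and the two newest are untouched.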
Attribute Types
...
A Bigstring is used for strings over 4000 characters long. Bigstring is a different data type to string and has some restrictions on filtering, sorting and aggregation.
For instances using an Oracle database, Bigstrings cannot be sorted or aggregated. On Oracle Bigstrings may only be filtered with the conditions (not) contains, is (not) null, (not) starts with or (not) ends with.
The maximum Bigstring size can be configured in System Configuration → System Tuning → Maximum Bigstring Size.
...
Decimal is a non-integer number which is stored to a set level of precision. Decimals have:
- a significant figures property, which is the number of digits stored,
- a decimal places property, which is the number of digits after the decimal point.
The maximum number of integer digits is therefore significant figures minus decimal places. If the number of integer digits is greater than this limit, analysis will fail. Decimal places will be stored to the scale specified.
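The significant figures / decimal places rule can be sketched with Python's decimal module. This is illustrative only: the function name is invented, and half-up rounding is an assumption, since the text does not specify PhixFlow's rounding behaviour.

```python
from decimal import Decimal, ROUND_HALF_UP

def check_and_scale(value, significant_figures, decimal_places):
    """Model of the rule above: at most (significant_figures -
    decimal_places) integer digits are allowed, and the value is
    stored to the configured number of decimal places."""
    max_integer_digits = significant_figures - decimal_places
    d = Decimal(str(value))
    integer_digits = len(str(abs(int(d))))     # digits before the point
    if integer_digits > max_integer_digits:
        raise ValueError("too many integer digits: analysis would fail")
    quantum = Decimal(1).scaleb(-decimal_places)  # e.g. 0.01 for 2 dp
    return d.quantize(quantum, rounding=ROUND_HALF_UP)
```

For example, with 5 significant figures and 2 decimal places, at most 3 integer digits are allowed, so 123.456 is stored as 123.46 but 1234.5 fails.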
...