Forms: File Collector

A File Collector describes the structure, content, naming patterns and location of files of data to be imported into PhixFlow.

Note that File Collectors can also be used to process files that reside inside compressed file archives such as zip files. Please see the section below on Compressed Files for further information.

Form: File Collector Details

The following fields are configured on the Details tab:

Field: Description
Name: The name of the file collector.
Enabled: Tick when the configuration is complete and the file collector is ready for use.
Source Type: This field can have any of the following values:
  • Specified Directory: This option will cause the file collector to use the Import File Location (specified in System Configuration on the System Directories tab) as the root input directory when looking for files to load.
  • Managed File: This option will cause the file collector to use the File Upload Directory (specified in System Configuration on the System Directories tab) as the root input directory when looking for files to load.
Number of Header Lines: The number of lines in the header of the file. These are ignored when reading the file. (This option is not available for the Binary File, XML and HTML file types.)
Tag: This field is only available if the Source Type field is set to Managed File. When files are uploaded by PhixFlow they are placed into a directory whose full path is a combination of the root File Upload Directory (specified in System Configuration on the System Directories tab), the tag value specified here and the Input Directory specified below (hard-coded to 'in' for Managed Files).
For example, if the System Configuration File Upload Directory is set to C:\ManagedFiles and Tag is set to CVFiles then the File Collector will look within C:\ManagedFiles\CVFiles\in for files to process.
Allow Non-Scheduled Collection: If this is turned on, then the collector will run as part of any ad-hoc Analysis Engine run which requires this data. If not, it will only run as part of a scheduled task under the Analysis Engine.
File Type: Can have values:
  • Comma Separated Values: fields are delimited by a comma (or another character).
  • Fixed Length Records: fields have a fixed column width.
  • Binary File: Data is extracted from the file using a Binary File Grammar (in XML) specified in the File Format Description tab.
  • File Details Only: Only attribute details about the file itself will be available.
  • Excel Spreadsheet: Data is extracted from an Excel spreadsheet with a .xls or .xlsx extension.

    .xlsx Excel files containing more than 10,000 rows are not supported by PhixFlow and should not be imported using a File Collector.

  • XML File: Data is extracted from an XML file.
  • HTML File: Data is extracted from an HTML file.
Next Sequence

The next sequence number expected to be found within the name of the file being imported.

This field is only available if File Location Strategy = All Files in Folder.

FTP Site: The FTP site on which the import file is stored. If no site is specified then the file is assumed to be on the local machine. If a site is specified then all directory paths specified on this form should be the full path to the file, since the base directory specified in System Configuration is ignored (the base directory is specific to the local machine).
Ignore Base Directory: This field is only available if Source Type = Specified Directory.
Normally the base directory, specified in the "System Directories" tab of the "System Configuration" screen, is prepended to all directories specified on this form. However, if this flag is ticked then this does not happen and the directories specified on this form alone are used as the full path specifications for the import file.
File Location Strategy: Can have values:
    • All Files in Folder: read all files matching the pattern specified in File Pattern Expression.
    • Read Paths: read in file path names from a collector or stream.

      This input collector or stream must be attached to the file collector by a pipe. The attribute of the input stream or collector which contains the file path names is specified in the field: File Name Attribute.

      Each file path name is interpreted as a pathname relative to the Import Directory. A path name may be a simple file name, or it may have multiple levels of directory, including compressed files (which will be interpreted as directories). The directory separator must be '/' (forward-slash), and not '\' (back-slash), even on a Windows platform.

      E.g. 'abc.csv', 'dir1/dir2.zip/abc.csv' (see the sketch after this list)

    • Read Names: read in file locations from a collector or stream. This input collector or stream must be attached to the file collector by a pipe. The attribute of the input stream or collector which contains the file locations is specified in the field: File Name Attribute.
      Read Names is deprecated.
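
For the Read Paths strategy, the following minimal Python sketch shows how a path name read from the input pipe might be resolved against the import directory. The function and directory names are purely illustrative; PhixFlow's own resolution logic may differ in detail:

    # Illustrative only: resolve a Read Paths value against the import directory.
    # Path names use '/' separators, and a '.zip' component is treated as a directory.
    import posixpath

    def resolve_read_path(import_directory, relative_path):
        # Relative paths must use '/' even on Windows, so join in POSIX style.
        return posixpath.join(import_directory, relative_path)

    print(resolve_read_path("C:/data/address/input", "abc.csv"))
    # -> C:/data/address/input/abc.csv
    print(resolve_read_path("C:/data/address/input", "dir1/dir2.zip/abc.csv"))
    # -> C:/data/address/input/dir1/dir2.zip/abc.csv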
Input Directory Expression

If the Source Type is Specified Directory, the result of evaluating the Input Directory Expression is appended to the Import File Location (specified in System Configuration on the System Directories tab) to give the input directory from which files are read.

If the Source Type is Managed File then this will contain a non-editable value of "in". This will be appended to the combined path of the System Configuration File Upload Directory and the Tag to give the input directory from which files are read.

If File Location Strategy = All Files in Folder PhixFlow will look in this directory to find files matching the pattern specified in File Pattern Expression.

If File Location Strategy = Read Names this is added to the start of the file location read from the file name attribute.

Note that because this is an expression field, if you supply a simple directory definition in plain text it must be surrounded by quotes. Also, directory separators must be / and not \, even if the file is being read from a Windows platform.

E.g. "C:/data/address/input/".

Directory Pattern Expression

This field is an expression used to identify valid sub-directories of the input directory. This expression must itself resolve to a "pattern matching" Regular Expression.

If a Directory Pattern Expression is provided then PhixFlow will not only check the Input Directory for files but will also check all sub-directories of the Input Directory. Each file found will then not only have its name checked against the File Pattern Expression but will also have the relative path from the Input Directory to the file (referred to as the sub-directory path) checked against the Directory Pattern Expression.

For example, suppose the Input Directory has the sub-directories: 'region1/teamA'; 'region1/teamB'; 'region2/teamA'. If you want all the files across all regions for teamA, but not teamB, then you could use the following Directory Pattern Expression to pick out just the files for teamA:

".*/teamA/"

Alternatively, if you wanted all the files for all teams in region 1 only, you could use the following Directory Pattern Expression:

"region1/.*"

Regular expression rules are used to perform this match rather than the sort of pattern matching rules you might be used to when listing files. For example:

  • To match any string of characters, you must use ".*" and not "*"
  • To match a "." you must use "\\." and not "." (which means any character)
  • You must use forward slashes "/" instead of backslashes "\" for directory separators

A number of internal variables are available in these expressions:

  • _fromDate: the start date of the period of the stream being processed.
  • _toDate: the end date of the period of the stream being processed.

Note that there are also a number of predefined compressed file expressions that will always be checked to determine if a file within a valid sub-directory is actually a compressed file. If so then this file will be assumed to be a valid compressed file and hence will be recursed into as if it were a standard matching directory. Please see Compressed Files for a list of valid compressed file expressions.
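
To illustrate the style of matching described above, the following Python sketch tests the example sub-directory paths against the example Directory Pattern Expression. It is only an approximation of the matching PhixFlow performs, and it assumes a full match against the relative sub-directory path including a trailing '/':

    import re

    # Example sub-directory paths relative to the Input Directory.
    sub_dirs = ["region1/teamA/", "region1/teamB/", "region2/teamA/"]

    # Directory Pattern Expression from the example above: all teamA directories.
    dir_pattern = ".*/teamA/"

    for sub_dir in sub_dirs:
        # Regular-expression matching, not shell-style globbing:
        # ".*" means "any characters"; "." on its own means "any single character".
        if re.fullmatch(dir_pattern, sub_dir):
            print(sub_dir, "matches; files here are checked against the File Pattern Expression")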

Exclude Dir. Pattern Expr.

This field is an expression that can be used to exclude certain sub-directories found by the Directory Pattern Expression. As with the Directory Pattern Expression, this expression must itself resolve to a Regular Expression.

 

For example, suppose the Input Directory has the sub-directories: 'region1/teamA'; 'region1/teamB'; 'region2/teamA'. If you want all the files across all regions for teamA, but not teamB, then you could use the following Directory Pattern Expression to find all files:

".*"

combined with the following Exclude Dir. Pattern Expr. to exclude those for teamB:

".*/teamB/"

Regular expression rules are used to perform this match rather than the sort of pattern matching rules you might be used to when listing files. For example:

  • To match any string of characters, you must use ".*" and not "*"
  • To match a "." you must use "\\." and not "." (which means any character)
  • You must use forward slashes "/" instead of backslashes "\" for directory separators

A number of internal variables are available in these expressions:

  • _fromDate: the start date of the period of the stream being processed.
  • _toDate: the end date of the period of the stream being processed.
File Pattern Expression: This field is only available if File Location Strategy = All Files in Folder. An expression used to generate a list of files to be read. This expression must itself resolve to a Regular Expression, used to match files in the input directory. Note that regular expression rules are used to perform this match, not the shell replacement style rules used in many file systems. E.g. to match all files, you must use ".*" and not "*". A number of internal variables are available in these expressions:
  • _fromDate: the start date of the period of the stream being processed.
  • _toDate: the end date of the period of the stream being processed.
  • %SEQ%: the current sequence number.

Examples:

"inputRecords.txt"

will read files called "inputRecords.txt" from the input directories.

".*"

will read all files in the input directories.

".*\\.txt"

will read all files in the input directories with the extension ".txt"

"teamA.*"

will read all files in the input directories whose names start with "teamA"

"record_" + toString(now(),"yyyy-MM-dd") + "\\.txt"

will read files in the input directories with the format "record_yyyy-MM-dd.txt", where yyyy-MM-dd is the current date. E.g. "record_2013-03-26.txt".

"("+listToString(_context.f,"|")+")"

will read files whose names are contained in the list of files uploaded by the Stream Action which caused the File Collector to run, but only if a Context Value called 'f' is set in the Action and its value expression is '_files'.
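
To see how the date-based example above turns into a concrete regular expression, here is a rough Python equivalent. The PhixFlow expression language is not Python; this sketch simply illustrates the resulting pattern and which file names it matches:

    import re
    from datetime import date

    # Rough equivalent of the PhixFlow expression:
    #   "record_" + toString(now(),"yyyy-MM-dd") + "\\.txt"
    # A fixed date is used here so the example is reproducible.
    run_date = date(2013, 3, 26)
    file_pattern = "record_" + run_date.strftime("%Y-%m-%d") + r"\.txt"
    print(file_pattern)  # record_2013-03-26\.txt

    # Candidate file names found in the input directory.
    for name in ["record_2013-03-26.txt", "record_2013-03-26X.txt", "summary.txt"]:
        if re.fullmatch(file_pattern, name):
            print("matched:", name)  # only record_2013-03-26.txt matches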

Archive Directory Expression: If set, processed files will be written to this directory. This field is an expression that must resolve to a directory path. Note that because this is an expression field, if you supply a simple directory definition in plain text it must be surrounded by quotes. Also, directory separators must be / and not \, even if the file is being moved to a directory on a Windows platform. E.g. "C:/data/address/archive/".
Error Directory Expression: If set, files that error during processing will be written to this directory. This field is an expression that must resolve to a directory path. Note that because this is an expression field, if you supply a simple directory definition in plain text it must be surrounded by quotes. Also, directory separators must be / and not \, even if the file is being moved to a directory on a Windows platform. E.g. "C:/data/address/error/".

The following fields are configured on the Advanced tab:

Field: Description
Minimum Files: Specifies the minimum number of files that are expected to be found whenever the collector runs. If fewer files are found then this is treated as an error.
Maximum Files: Specifies the maximum number of files that will be processed whenever the collector runs.
Max Records Per File: Specifies the maximum number of records that will be read from each file processed.
Errors Before Rollback: The maximum number of errors PhixFlow will permit during the processing of a file. Once this number has been exceeded, PhixFlow will abandon the attempted file load.
Parallel Readers: The number of files to process in parallel. If blank, this defaults to 1.

If the collector is configured to read files in sequence, this field is ignored and a single file reader is used.

Unreadable Directories: The action to take on finding an unreadable directory when searching a directory hierarchy for files to import.
  • Error: unreadable directories will be reported, and if any are found, the file search will fail.
  • Warning: unreadable directories will be reported, but the file search will continue unaffected.
  • Ignore: unreadable directories will be silently ignored.
Character Set: The character encoding to be used.
Select a value from the drop down list. If Other is selected, a new box opens and a new character set can be entered. A full list of available character sets can be found here (Canonical Names from both columns can be used).
Column Separator: Select a value from the drop down list. If Other is selected, a new box opens and a new column separator can be entered.
Separator Character: This field is only available if Column Separator = Other. Allows a custom column separator to be entered.
Quote Style: Select a value from the drop down list. If Other is selected, a new box opens and a new quote character can be entered.
Quote Character: This field is only available if Quote Style = Other. Allows a custom quote character to be entered.
Ignore Missing Columns: This field is only available if File Type = Comma Separated Values.

If this flag is set then PhixFlow will not throw an error if the record being read contains fewer columns than expected. If this flag is not set then an error will be reported if there are too few columns.

Ignore Extra Columns: This field is only available if File Type = Comma Separated Values.

If this flag is set then PhixFlow will not throw an error if the record being read contains more columns than expected. If this flag is not set then an error will be reported if there are too many columns.

Import Rows Matching: An expression, that must resolve to a Regular Expression, can be specified in this field. PhixFlow will attempt to match each line in the file against the expression, and only those that match will be imported.
Replace Text Matching: Both fields Replace Text Matching and With are expressions that must resolve to Regular Expressions. In each imported line, all occurrences of the text matched by Replace Text Matching are replaced with With.
With: See the description of Replace Text Matching above.
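
As a rough illustration of the kind of line filtering and replacement these three fields describe, here is a minimal Python sketch; the field values are invented and PhixFlow's exact matching semantics may differ:

    import re

    import_rows_matching = "ACC.*"   # only import lines starting with "ACC"
    replace_text_matching = ",,"     # Replace Text Matching
    with_value = ",0,"               # With

    lines = ["ACC123,42,,GBP", "HDR,ignore,this", "ACC456,,7,USD"]

    for line in lines:
        if re.match(import_rows_matching, line):                     # Import Rows Matching
            print(re.sub(replace_text_matching, with_value, line))   # Replace Text Matching / With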
Excel Data Expression: This field is only available if File Type = Excel Spreadsheet.

This field should be populated according to the following syntax: "WorksheetName!TopLeftCell:BottomRightCell", e.g. "DailyCallsSheet!A1:G100", or a list of such expressions, e.g. ["DailyCallsSheet!A1:G100", "A1:B20", "Calls!A1:C100"]. PhixFlow expressions can also be used, such as _worksheets, which returns the list of available worksheets in the current Excel file. Any value that is not a valid PhixFlow expression (strings are valid PhixFlow expressions) will cause the file collector to fail.


The following examples show how to populate this field to select various excel worksheets and cell ranges.

  • All rows and columns in the default (first) worksheet: leave this field empty, as this is the default behaviour
  • Specified columns only on the default (first) worksheet: "A:C"
  • Specified cell range only on the default (first) worksheet: "B1:G10"
  • Specified cell range on a specified worksheet: "DailyCallsSheet!A2:F20"
  • List of specified cell ranges on multiple worksheets: ["DailyCallsSheet!A2:F20", "Calls!A1:C400", "Accounts!A5:F50"]

 

Note that if a worksheet is specified, then the full cell range must also be specified. Hence it is not possible to select a 'worksheet only' or 'columns only for a specified worksheet', e.g. DailyCallsSheet or DailyCallsSheet!A:C are not supported.
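
For illustration only, the following Python sketch shows one way the "WorksheetName!TopLeftCell:BottomRightCell" syntax could be broken into its parts, with the worksheet name being optional. This is not PhixFlow's own parser:

    import re

    # Hypothetical parser for "WorksheetName!TopLeftCell:BottomRightCell".
    RANGE_RE = re.compile(
        r"^(?:(?P<sheet>[^!]+)!)?(?P<top_left>[A-Z]+\d*):(?P<bottom_right>[A-Z]+\d*)$"
    )

    for expr in ["DailyCallsSheet!A2:F20", "A:C", "B1:G10"]:
        match = RANGE_RE.match(expr)
        print(expr, "->", match.groupdict() if match else "not recognised")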

Ignore Undefined Values: This checkbox is only available if File Type = Excel Spreadsheet.

This checkbox should be ticked if all unsupported Excel values such as #N/A, #REF!, #DIV/0! etc. should be ignored and replaced with null values during processing. In this case a single warning message will be displayed to the user once processing has completed, stating the number of unsupported cell values found during the processing and giving a detailed message about the first unsupported cell value.

If this checkbox is unticked then each unsupported Excel value will be reported as an individual warning/error in the console, and processing will be terminated if the maximum number of errors/warnings is exceeded.

XPath Expression: This field is only available if File Type = XML File or HTML File.

This field should be populated according to valid XPath syntax. Please see XPath Examples for how to use XPath expressions and how the returned data can be used and evaluated in the corresponding stream attribute expressions.
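
Outside PhixFlow, the effect of an XPath expression can be previewed with any standard XPath library. For example, the following Python sketch (using the third-party lxml library and an invented two-record document) selects the nodes that would each become a record:

    from lxml import etree

    xml = b"""
    <calls>
      <call><number>0123</number><duration>60</duration></call>
      <call><number>0456</number><duration>30</duration></call>
    </calls>
    """

    root = etree.fromstring(xml)
    # XPath expression selecting one node per record to be imported.
    for node in root.xpath("//call"):
        print(node.findtext("number"), node.findtext("duration"))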

The following fields are configured on the File Format Description tab of the form. Note: this tab is only available for Binary File Collectors.

Field: Description
Stream Item Node

A list of target node names - that is, the names of nodes that will generate an output record. This field is an expression that must resolve to a single string, or a list of strings.

Example of a single target:

"DATA_RECORD"

Example of a list of targets:

["DATA_RECORD1","DATA_RECORD2","DATA_RECORD3"]

Validate File Format: Validate that the file matches the XML description.
File Format Description: A Binary File Grammar, in XML, describing the format of data in the file.

The following fields are configured through separate tabs on the form:

Field: Description
File Columns: A list of the File Attributes configured on this File Collector. Selecting an attribute by double-clicking it brings up the details of that attribute in the File Collector Attribute Details form.
XML Namespaces: XML namespaces are used for providing uniquely named elements and attributes in an XML document. An XML instance may contain element or attribute names from more than one XML vocabulary. If each vocabulary is given a namespace, the ambiguity between identically named elements or attributes can be resolved. Note: this tab is only available for XML File Collectors.
Description: A description of the file collector.

File Collector Attributes

A number of attributes are available on all types of File Collector:

Attribute: Description
_fileName: The name of the file.
_lineNumber: The line number of the record within the file it was read from.

The _lineNumber attribute is not available for File Collectors of type File Details Only.

_modifiedDate: The datetime when the file was last modified.

The last modified time of a single file residing within a .gz or a .tgz container cannot be determined by PhixFlow; instead the datetime of when the corresponding gz/tgz container was created will be returned.

_path: The full path to the file, which is the result of concatenating the _rootDirectory and the _subDirectory values.
_rootDirectory: The root base directory (if specified) concatenated with the value evaluated in the collector's 'Input Directory Expr' field.
_size: The size of the file in bytes.

The size of a single file residing within a .gz or a .tgz container cannot be determined by PhixFlow; instead a size of -1 will be returned.

_subDirectory: The sub-directory, relative to the _rootDirectory, in which the corresponding file resides.
_worksheet: The name of the current worksheet of the Excel file. The _worksheet attribute is not available if the file type is not Excel Spreadsheet.
_range: The Excel range expression that was used. The _range attribute is not available if the file type is not Excel Spreadsheet.

Compressed Files

In the majority of cases a compressed file will just contain a single file. For example, a simple zip file called DailyCalls.zip might contain a single file named DailyCalls.csv.

However, some compressed files contain directories, sub-directories, files and further compressed files. In such cases the compressed file can be thought of as a directory, and any compressed files within it can in turn be thought of as directories within that directory structure. Compressed files are therefore treated like normal directories and obey the same rules when matching the 'Directory Pattern Expression' and the 'Exclude Dir Pattern Expression'. Similarly, all directories, sub-directories and compressed files within a compressed file are also treated as normal directories when matching these expressions. Files contained anywhere inside the directory structure of the compressed file (including files contained in a compressed file within the compressed file) are treated as normal files when matching the 'File Pattern Expression'.
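
To make this concrete, here is a simplified Python sketch of treating a zip archive as a directory tree and applying the directory and file patterns to the paths inside it. This is a rough model rather than PhixFlow's implementation: it only looks one archive deep, and the exact way PhixFlow anchors the patterns may differ:

    import re
    import zipfile

    def matching_entries(zip_path, file_pattern, dir_pattern, exclude_pattern=None):
        # Yield entries in the archive whose directory matches dir_pattern
        # (and not exclude_pattern) and whose file name matches file_pattern.
        with zipfile.ZipFile(zip_path) as archive:
            for entry in archive.namelist():
                if entry.endswith("/"):            # skip directory entries themselves
                    continue
                directory, _, file_name = entry.rpartition("/")
                if not re.fullmatch(dir_pattern, directory + "/"):
                    continue
                if exclude_pattern and re.fullmatch(exclude_pattern, directory + "/"):
                    continue
                if re.fullmatch(file_pattern, file_name):
                    yield entry

    # e.g. list the entries in the Outer.zip example below that would be processed.
    for entry in matching_entries("Outer.zip", ".*calls10.*", ".*subdir1.*", ".*subdir2.*"):
        print(entry)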

Supported Compressed Files

Compression Type (file extension): Description

zip (".zip"): A zip archive created by either Windows programs such as WinZip or unix commands such as zip.

e.g. zip dailyCalls.zip dailyCalls_20120918.csv

would result in a compressed zip file called dailyCalls.zip being created, which would include a single file called dailyCalls_20120918.csv.

tar (".tar"): A tar archive created by the unix tar command.

e.g. tar -cvf dailyCalls.tar dailyCalls_20120918.csv

would result in a tar file called dailyCalls.tar being created, which would include a single file called dailyCalls_20120918.csv.

gz (".gz"): A gz archive created by the unix gzip command.

e.g. gzip dailyCalls_20120918.csv

would result in a compressed gz file called dailyCalls_20120918.csv.gz being created, which would include a single file called dailyCalls_20120918.csv. Note that the unix gzip command always assumes the named container has a single file of the same name contained within.

tgz (".tgz"): A tarred gz archive created by combining the tar and gzip commands into a single command.

e.g. tar -cvzf dailyCalls.tgz dailyCalls_20120918.csv

would result in a compressed tgz file called dailyCalls.tgz being created, which would include a single file called dailyCalls_20120918.csv.

Note that there is currently no support for rar type compressions.

File Compression Examples

The following examples show how each compressed file found will be treated, given the values shown for 'File Pattern Expression', 'Directory Pattern Expression' and 'Exclude Dir Pattern Expression'.

Example 1
Compressed File Name: DailyCalls10.zip
Compressed File Sub System: /DailyCalls10.csv
File Pattern Expression: ".*Calls10.*"
Directory Pattern Expression: ".*"
Exclude Dir Pattern Expression: (none)
Matching/Processed Files: DailyCalls10.zip/DailyCalls10.csv

Example 2
Compressed File Name: DailyCalls.tar
Compressed File Sub System: /subdir1/calls10.csv, /subdir1/calls20.csv, /subdir2/calls100.csv, /subdir2/calls200.csv, /subdir3/calls1000.csv, /subdir3/calls2000.csv
File Pattern Expression: ".*calls10.*"
Directory Pattern Expression: ".*"
Exclude Dir Pattern Expression: ".*subdir2.*"
Matching/Processed Files: DailyCalls.tar/subdir1/calls10.csv, DailyCalls.tar/subdir3/calls1000.csv

Example 3
Compressed File Name: Outer.zip (note that Outer.zip contains a compressed zip file called Inner.zip)
Compressed File Sub System: /subdir1/calls10.csv, /subdir1/calls20.csv, /subdir1/Inner.zip/innerdir/calls100.csv, /subdir1/Inner.zip/subdir2/calls1000.csv
File Pattern Expression: ".*calls10.*"
Directory Pattern Expression: ".*subdir1.*"
Exclude Dir Pattern Expression: ".*subdir2.*"
Matching/Processed Files: Outer.zip/subdir1/calls10.csv, Outer.zip/subdir1/Inner.zip/innerdir/calls100.csv

Example 4
Compressed File Name: Outer.tar.gz (note that Outer.tar.gz contains a tar container which in turn contains a compressed zip file called Inner.zip)
Compressed File Sub System: /Outer.tar/subdir1/calls10.csv, /Outer.tar/innerdir/calls100.csv, /Outer.tar/subdir1/Inner.zip/innerdir/calls1000.csv
File Pattern Expression: ".*calls10.*"
Directory Pattern Expression: ".*subdir1.*innerdir.*"
Exclude Dir Pattern Expression: (none)
Matching/Processed Files: Outer.tar.gz/Outer.tar/subdir1/Inner.zip/innerdir/calls1000.csv

Form Icons

 

The form provides the standard form icons as well as the following:

Shows the model predecessors of the file collector.

 

The form also provides the following icons on the File Columns tab:

If a valid location has been configured in the file collector to locate an existing CSV file, the user can click on a button at the top of the grid to automatically create the file column descriptions in this form.

The column names are taken from the first row of the file. To construct the name, all invalid characters are stripped out of the value found in each cell and the result is taken as the name of the column.

The remaining rows are examined to try to determine the type and length for each column definition based on the values found in the file. If a type cannot be determined then the column is defined as a string. The length of the field is set to be the length of the longest value found.
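
A minimal Python sketch of this kind of inference is shown below. It is indicative only: the exact characters PhixFlow strips and the types it recognises are not spelled out here:

    import csv
    import re

    def infer_columns(csv_path):
        # Guess column names, types and lengths from a CSV file.
        with open(csv_path, newline="") as f:
            rows = list(csv.reader(f))
        header, data = rows[0], rows[1:]

        columns = []
        for index, raw_name in enumerate(header):
            # Name: the header cell with invalid characters stripped out.
            name = re.sub(r"[^A-Za-z0-9_]", "", raw_name) or "column%d" % (index + 1)
            values = [row[index] for row in data if index < len(row)]

            # Type: integer or float if every non-empty value parses as one, else string.
            if all(re.fullmatch(r"-?\d+", v) for v in values if v):
                col_type = "integer"
            elif all(re.fullmatch(r"-?\d+(\.\d+)?", v) for v in values if v):
                col_type = "float"
            else:
                col_type = "string"

            # Length: the longest value found in the file.
            length = max((len(v) for v in values), default=0)
            columns.append((name, col_type, length))
        return columns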

Shows the list of File Collectors.

Shows the list of Streams.

Deletes the selected object from the list.

Adds a new File Attribute. See the File Collector Attribute Details form.

See Also
