Reads binary records from the standard input and forwards them to a running instance of
Event Stream Processor via the Gateway interface.
The format of the data is zero or more occurrences of <Stream Handle><Raw
Binary Record>. <Stream Handle> is a
uint32_t indicating the destination stream for the record. This tool is
typically used at the end of a pipeline with the streamingconvert tool.
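The framing described above can be sketched in Python. This is an illustrative helper, not part of the product: the 4-byte handle is packed in the machine's native byte order, and the raw record bytes are assumed to be in the server's native binary format (as produced by streamingconvert).

```python
import struct

def frame_record(stream_handle: int, raw_record: bytes) -> bytes:
    """Prefix a raw binary record with its uint32_t destination stream handle.

    The handle is packed in native byte order ("=I", 4 bytes); see the -b
    option when the producing machine and the server differ in byte order.
    """
    return struct.pack("=I", stream_handle) + raw_record

# Example: frame two records destined for stream handles 1 and 2.
payload = frame_record(1, b"\x00\x01") + frame_record(2, b"\x02\x03")
```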
streamingupload -p <[<host>:]<port>/workspace-name/project-name> -c <user[:password]> [<OPTION...>]
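The shape of the -p argument in the synopsis above can be illustrated with a small parser. This is a hypothetical helper for illustration only; defaulting the omitted host to localhost is an assumption of this sketch, not documented tool behavior.

```python
from typing import Tuple

def parse_uri(uri: str) -> Tuple[str, int, str, str]:
    """Split [host:]port/workspace-name/project-name into its parts.

    NOTE: the "localhost" default for an omitted host is an assumption
    made for this sketch.
    """
    hostport, workspace, project = uri.split("/", 2)
    if ":" in hostport:
        host, port = hostport.split(":", 1)
    else:
        host, port = "localhost", hostport
    return host, int(port), workspace, project
```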
- -c <user[:password]> Authenticates with a <user> ID and, optionally, a <password>.
  If you do not provide the <password>, or
  the -k or -G option,
  you are prompted for the password. If Event Stream Processor
  successfully authenticates with these credentials, the connection is maintained;
  otherwise, Event Stream Processor immediately closes the connection.
- -b Sets byteswap mode. Use this option when the raw records fed into
  streamingupload, and the server to which streamingupload is
  sending data, have a byte order different from that of the architecture on which the
  streamingupload client is running. The byte order of the data must
  always match the byte order of the server.
- -d <N> Inserts a delay of <N> microseconds between records or transaction blocks.
- -e Encrypts traffic to Event Stream Processor via OpenSSL sockets. Ensure
  that Event Stream Processor is started in encrypted mode to use this option.
- -f <timeout:finalizer> Sets a finalizer to be run. The ESP Server runs the SQL
finalizer statement (a combination of insert, update, or delete statements, separated by
semicolons) if no message is received from streamingupload within
  <timeout> milliseconds. The SQL statement is also run when the connection closes.
- -G (Optional) Authenticates access to Event Stream Processor with credentials
  held within a Kerberos authentication ticket. Environment variables determine where
  the system looks for authentication tickets. If the user name differs from
  the default principal name in the ticket cache, specify an alternate user name with
  the -c option to use the corresponding authentication ticket.
- -h Prints a list of possible options on the screen, along with a brief explanation of each.
- -k <privateRsaKeyFile> Performs authentication using the RSA private key file mechanism instead of
  password authentication. The <privateRsaKeyFile> must specify the
  pathname of the private RSA key file. Ensure that the
  ESP Server has been started with the
  -k option specifying the directory in which to store the RSA keys.
  Note: With this option enabled, the user name must be specified with the
  -c option, but the password is not required.
- <conn_name> Sets a symbolic tag name for the connection. This allows the
  streamingupload connection to be looked up easily in the
  _ESP_Clients metadata stream, and connections can be killed by referencing this tag name.
- -p <[<host>:]<port>/workspace-name/project-name> (Required) Together, the
  <host>:<port>/<workspace-name>/<project-name> arguments specify the URI used to connect to the
  ESP Server (cluster manager). For example, if you have
  started your ESP cluster server on port 19011, the host name is
  set as localhost, and you are running a project called
  prj1 in the default workspace, specify -p as localhost:19011/default/prj1.
- -r <N> Uploads records or transaction blocks at a rate of <N> per second. The default
is to upload as fast as the server can absorb the data.
- -s <N> Synchronizes the source streams every <N> records or transaction blocks. This
guarantees that the records have been absorbed by the source streams. By default, source
streams are not synchronized.
- -t <size> Runs in transaction mode. Each record that streamingupload reads is buffered
on a per-stream basis. When a buffer reaches the indicated number of records, it is
wrapped as a single transaction and sent to Event Stream Processor. If
all records read are from one stream, this effectively buffers the stream into
<size> record chunks and commits them as transactions. Any buffered
records are sent as a single transaction per stream when an EOF is read.
- -w <size> Runs in envelope mode. Each record that streamingupload reads is buffered on
a per-stream basis. When a buffer reaches the indicated number of records, it is wrapped
in a single envelope and sent to Event Stream Processor. If all
records read are from one stream, this effectively buffers the stream into
<size> record chunks. Any buffered records are sent as a single
envelope per stream when an EOF is read.
- -x Upon receiving an EOF on the standard input, sends an <END OF STREAM> marker to each
stream for which data has been uploaded. If all source streams of
Event Stream Processor receive an <END OF STREAM> marker,
Event Stream Processor shuts down and exits.
- -X Forces Event Stream Processor to exit when upload completes.
- -Y <beat> Forces Event Stream Processor to exit if no data is received within the <beat> interval.
- -v Prints the streamingupload utility version.
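The per-stream buffering performed by the -t and -w options above can be modeled with a short sketch. This is an illustrative model, not the tool's code: records accumulate per stream handle, each full buffer of <size> records is emitted as one transaction or envelope, and partial buffers are flushed, one block per stream, at EOF.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def chunk_by_stream(records: Iterable[Tuple[int, bytes]],
                    size: int) -> List[List[Tuple[int, bytes]]]:
    """Group (handle, record) pairs into per-stream blocks of at most `size`.

    Models -t/-w buffering: a full buffer becomes one transaction or
    envelope; partial buffers flush when the input is exhausted (EOF).
    """
    buffers: Dict[int, List[Tuple[int, bytes]]] = defaultdict(list)
    blocks: List[List[Tuple[int, bytes]]] = []
    for handle, record in records:
        buffers[handle].append((handle, record))
        if len(buffers[handle]) == size:
            blocks.append(buffers.pop(handle))
    # EOF: flush any remaining partial buffers, one block per stream.
    blocks.extend(buf for buf in buffers.values() if buf)
    return blocks
```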
For a description of the format of the CSV and XML input files, see the streamingconvert documentation.
To convert all XML records in file foo.xml
to native binary format and post
them to a running instance of Event Stream Processor:
cat foo.xml | streamingconvert -p localhost:11180/default/prj1 | streamingupload -c user:pass -p localhost:11180/default/prj1
To convert all comma-separated records in the foo.csv
file to native binary
format and post them to a running instance of Event Stream Processor:
cat foo.csv | streamingconvert -d "," -p localhost:11180/default/prj1 | streamingupload -c user:pass -p localhost:11180/default/prj1
To convert all XML records in the foo.xml
file to native binary format and
post them to a running instance of Event Stream Processor on a target
machine HOST that has a byte order different from that of the machine on which streamingupload is running:
cat foo.xml | streamingconvert -b -p localhost:11180/default/prj1 | streamingupload -c user:pass -b -p localhost:11180/default/prj1
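The byte-order conversion that -b implies for each uint32_t stream handle can be sketched as follows. This is illustrative only and covers the handle prefix alone; it is not the tool's implementation.

```python
import struct

def swap_handle(handle: int) -> int:
    """Reverse the byte order of a uint32_t stream handle.

    A handle written on a little-endian machine must be byte-reversed
    before a big-endian server can interpret it, and vice versa.
    """
    return struct.unpack("<I", struct.pack(">I", handle))[0]
```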