initiating a machine learning script on a remote server


initiating a machine learning script on a remote server

Mike Sofen

I’ve been prototyping various functionality in NiFi, initially on a Windows laptop and now on a single GCP Linux instance (for now), using the more basic processors for files and databases. It’s really a superb platform.

What I now need to solve is firing a Python machine learning script that lives on another CPU/GPU-equipped instance, as part of a pipeline that detects a new file to process, sends the file name/location to the remote server, and receives the results of the processing back for further action. We need maximum performance and robustness from this step of the processing.

I’ve read a bunch of posts on this, and they point to using the ExecuteStreamCommand processor (vs. ExecuteProcess, since it accepts incoming flowfiles), but none seem to show how to configure the processor to point to a remote server and execute a script that exists on that server with arguments/variables I pass in with the call. These servers will all be GCP instances. To keep things simple, let’s ignore security for the moment and assume I own both servers.


Can someone point me in the right direction? Many thanks!


Mike Sofen


Re: initiating a machine learning script on a remote server

Darren Govoni
Quick answer is that you could just execute an ssh command to run the script on the remote machine.
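A minimal sketch of what that could look like: build the ssh command line that ExecuteStreamCommand would invoke. The host, user, script path, and file path below are hypothetical placeholders, not anything from this thread.

```python
import shlex

def build_remote_command(host, script_path, file_location):
    """Return the argv list for running a remote Python script over ssh.

    host: "user@hostname" of the GPU instance (placeholder).
    script_path: path to the ML script on the remote machine (placeholder).
    file_location: the file name/location NiFi detected, passed as an argument.
    """
    # Quote the remote-side arguments so paths with spaces survive the
    # extra shell interpretation that ssh performs on the remote end.
    remote = "python3 {} {}".format(shlex.quote(script_path),
                                    shlex.quote(file_location))
    return ["ssh", host, remote]

cmd = build_remote_command("user@gpu-host", "/opt/ml/score.py",
                           "/data/new_file.csv")
# Equivalent ExecuteStreamCommand configuration (sketch):
#   Command Path:      ssh
#   Command Arguments: user@gpu-host;python3 /opt/ml/score.py /data/new_file.csv
#   (arguments are separated by ';', the processor's default delimiter)
# The script's stdout becomes the outgoing flowfile content, which is how
# the results come back into the flow for further actions.
```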

If you need flowfiles to go remote, NiFi supports remote process groups.
