Namespace.so is a service that provides ephemeral instances (Linux and macOS), primarily used for CI/CD workflows. Similar to Fly.io, it’s another way to avoid managing servers yourself.
I’m currently using it to run some of my CI/CD workflows, but I want to play around with it in other ways. One would be to make it a run-on-demand nixpkgs builder, similar to what I have with Fly.io; the difference is that with Fly.io I need to manage the instances myself, whereas Namespace terminates instances after a set period of time, which reduces the monthly hosting spend.
Before I start: I don’t begrudge Namespace at all for the (current, as of the date of this post) inability to SFTP files directly to instances. They are working hard on a lot of things, and this was a fun little thing I wanted to do. Namespace is constantly adding and building new functionality, and I suspect this will eventually be something they support, but I wanted something to play around with now. I happily pay Namespace for their service and am very excited to see what they add next.
I asked in the Namespace Discord if anyone had a way to do this already, and I was pointed to [croc](https://github.com/schollz/croc) as something others have used before, but I wanted something I didn’t need to run on the remote server in addition to locally. I’m sure there are a thousand other ways to do this, but I like to make fun little tools for myself in bash, so this was a great excuse to play around.
Unlike with Fly.io, Namespace doesn’t support direct SSH access, so I needed to come up with a way to “SFTP” files up to the server. These could be configuration files, binary tools, and anything else needed to run whatever I’m working on at the time.
The `nsc` client allows for pseudo-shell sessions, and also for running arbitrary commands, so I thought I could use that to my advantage. I originally copied files the naive way with `nsc ssh $machine_id 'echo "hi" > /root/test.txt'`, but that wouldn’t work well with binary files or large text files. Due to limits on how long a single command can be, I also had to chunk the files up and then reassemble them on the remote machine. This means the transfer takes longer than usual, as each chunk has to re-establish a connection with the server. One way to speed it up would be to parallelize the transfer, but that’s out of scope for right now.
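If you want to convince yourself that base64 plus split survives reassembly before involving nsc at all, a quick local round-trip check (not part of the script itself, and the paths are just examples) looks something like this:
# local sanity check: encode, chunk, reassemble, decode, and compare against the original
cat /home/tklk/Photos/nyan_cat.gif | base64 > /tmp/nyan.b64
split -b 1k /tmp/nyan.b64 /tmp/chunk_
cat /tmp/chunk_* | base64 -d | cmp - /home/tklk/Photos/nyan_cat.gif && echo "round-trip OK"
rm /tmp/nyan.b64 /tmp/chunk_*
With that out of the way, here’s the full script: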
#!/bin/bash
# help text
usage() {
  echo "Usage: $0 -l <local_file> -r <remote_file> -m <machine_id> [-c <chunk_size>]"
  echo ""
  echo "This script uses the Namespace client 'nsc' to copy files to your instance"
  echo "Note: You'll need to ensure you've logged in with 'nsc login' first"
  echo ""
  echo "Options:"
  echo "  -l <local_file>   Path to the local file to be transferred"
  echo "  -r <remote_file>  Path to the remote file to be created"
  echo "  -m <machine_id>   Machine ID for the nsc ssh command"
  echo "  -c <chunk_size>   Size of the chunks for splitting the base64-encoded file (default: 1k)"
  echo "  -h, --help        Show this help message and exit"
  exit 1
}
chunk_size="1k" # default chunk size
# parse args
while getopts ":l:r:m:c:h" opt; do
  case ${opt} in
    l )
      local_file=$OPTARG
      ;;
    r )
      remote_file=$OPTARG
      ;;
    m )
      machine_id=$OPTARG
      ;;
    c )
      chunk_size=$OPTARG
      ;;
    h )
      usage
      ;;
    \? )
      usage
      ;;
  esac
done
# check for (full) help flag
for arg in "$@"; do
  if [ "$arg" == "--help" ]; then
    usage
  fi
done
# make sure all args are set
if [ -z "${local_file}" ] || [ -z "${remote_file}" ] || [ -z "${machine_id}" ]; then
  usage
fi
# base64 encode file and split into chunks
cat "$local_file" | base64 > /tmp/local_file.b64
split -b "$chunk_size" /tmp/local_file.b64 /tmp/chunk_
# init remote file
nsc ssh "$machine_id" "echo -n '' > /tmp/remote_file.b64"
# FIXME: transfer chunks individually, then reassemble them on server (this would allow for parallelization)
# loop over chunks and send each one
for chunk in /tmp/chunk_*; do
  chunk_content=$(cat "$chunk")
  nsc ssh "$machine_id" "echo '$chunk_content' >> /tmp/remote_file.b64"
done
# decode base64 file
nsc ssh "$machine_id" "base64 -d /tmp/remote_file.b64 > $remote_file"
# clean up local temp files
rm /tmp/local_file.b64 /tmp/chunk_*
echo "File transferred successfully."
Then you’d run it like:
# ensure you are logged into namespace.so
nsc login
# create a new ephemeral instance (4 cores, 8gb ram)
machine_id=$(nsc create --machine_type 4x8 --bare --output json | jq -r .cluster_id)
# copy a file up to the server (assuming you've already run chmod +x on the script)
./transfer_file.sh -l /home/tklk/Photos/nyan_cat.gif -r /root/nyan.gif -m $machine_id
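If you want to spot-check that the transfer actually worked, the same `nsc ssh` trick can compare checksums on both ends (assuming `sha256sum` from coreutils is available on the instance, which it should be):
# optional: verify the transfer by comparing checksums locally and remotely
sha256sum /home/tklk/Photos/nyan_cat.gif
nsc ssh $machine_id "sha256sum /root/nyan.gif"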
A way I could’ve avoided this hacky bash script would be to install Tailscale in the ephemeral instance and use `tailscale cp` to copy the files up, but that’s something for another day.
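For the curious, a rough (and untested) sketch of that route might look like the below, using Taildrop’s `tailscale file cp`/`tailscale file get`; the auth key and node name are placeholders:
# untested sketch: install Tailscale on the instance and bring it up with an auth key
nsc ssh $machine_id "curl -fsSL https://tailscale.com/install.sh | sh && tailscale up --authkey=tskey-auth-XXXX"
# from the local machine, send the file to the instance's Tailscale node...
tailscale file cp /home/tklk/Photos/nyan_cat.gif <instance-node-name>:
# ...then receive it on the instance
nsc ssh $machine_id "tailscale file get /root/"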
**Update:**
I had some extra time after writing this post and went back to add parallelization to the script. I used GNU `parallel` to send the chunks to the remote machine, which sped up the transfer significantly. You’ll need to make sure `parallel` is installed on your local machine to use these adjustments, as it’s not installed by default on several common OSs. Here’s an updated section you can swap into the script above:
nsc ssh "$machine_id" "mkdir -p /tmp/chunks && echo -n '' > /tmp/remote_file.b64"
send_chunk() {
  chunk=$1
  chunk_name=$(basename "$chunk")
  chunk_content=$(cat "$chunk")
  nsc ssh "$machine_id" "echo '$chunk_content' > /tmp/chunks/$chunk_name"
}
export -f send_chunk
export machine_id
# use gnu parallel to transfer files
# FIXME: accept -j as an argument to be able to adjust the hardcoded number
# left as an exercise for the reader
find /tmp/chunk_* | parallel -j 4 send_chunk
# reassemble chunks
nsc ssh "$machine_id" "cat /tmp/chunks/* > /tmp/remote_file.b64 && rm -r /tmp/chunks"
# base64 decode and write to destination path
nsc ssh "$machine_id" "base64 -d /tmp/remote_file.b64 > $remote_file && rm /tmp/remote_file.b64"
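If you’re curious how much the parallel version helps for your own files, the simplest comparison is just timing the script before and after swapping this section in; the result will depend on file size, chunk size, and the -j value:
# rough comparison: time the serial and parallel versions against the same file
time ./transfer_file.sh -l /home/tklk/Photos/nyan_cat.gif -r /root/nyan.gif -m $machine_id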