Easy multipart uploads for Amazon S3, DigitalOcean Spaces, and other S3-compatible services. Available as a CLI and Go library.
```bash
# Set your secrets, region, and endpoint in the environment
export ACCESS_KEY="..."
export SECRET_KEY="..."
export ENDPOINT="sfo2.digitaloceanspaces.com" # for AWS, set REGION instead (see below)

# Pipe in data or redirect in a file
pipedream --bucket images --path pets/puppy.jpg < puppy.jpg

# Get fancy
export now=$(date +"%Y-%m-%d_%H:%M:%S_%Z")
cat /data/dump.rdb | gzip | pipedream --bucket backups --path dump-$now.rdb.gz

# For more info
pipedream -h
```
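If you're uploading to Amazon S3 itself, set `REGION` instead of `ENDPOINT`, as noted above. A minimal sketch with the same flags; the region, bucket, and path values are placeholders:

```bash
# Hypothetical AWS setup: REGION takes the place of ENDPOINT
export ACCESS_KEY="..."
export SECRET_KEY="..."
export REGION="us-west-2" # placeholder region

pipedream --bucket my-bucket --path backups/dump.rdb < dump.rdb
```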
Download a build from the releases page. macOS, Linux and Windows builds are available for a variety of architectures.
macOS users can also use Homebrew:
```bash
brew tap meowgorithm/tap && brew install meowgorithm/tap/pipedream
```
Or you can just use `go get`:

```bash
go get github.com/meowgorithm/pipedream/pipedream
```
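On Go 1.17 and later, `go get` no longer installs executables, so `go install` with a version suffix is the usual route; a sketch, assuming the CLI lives at the import path above:

```bash
go install github.com/meowgorithm/pipedream/pipedream@latest
```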
The library uses an event-based model, sending events through a channel.
import "github.com/meowgorithm/pipedream"
// Create a new multipart upload object
m := pipedream.MultipartUpload{
AccessKey: os.Getenv("ACCESS_KEY"),
SecretKey: os.Getenv("SECRET_KEY"),
Endpoint: "sfo2.digitaloceanspaces.com", // you could use Region for AWS
Bucket: "my-fave-bucket",
}
// Get an io.Reader, like an *os.File or os.Stdout
f, err := os.Open("big-redis-dump.rdb")
if err != nil {
fmt.Printf("Rats: %v\n", err)
os.Exit(1)
}
defer f.Close()
// Send up the data. Pipdream returns a channel where you can listen for events
ch := m.Send(f, "backups/dump.rdb")
done := make(chan struct{})
// Listen for activity. For more detailed reporting, see the docs
go func() {
for {
e := <-ch
switch e.(type) {
case pipedream.Complete:
fmt.Println("It worked!")
close(done)
return
case pipedream.Error:
fmt.Println("Rats, it didn't work.")
close(done)
return
}
}
}()
<-done
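Since `Send` just takes an `io.Reader`, you can also stream from standard input, the way the CLI does. A minimal variation, assuming the `MultipartUpload` value `m` and the event handling from the example above; the object path is a placeholder:

```go
// Stream whatever is piped to the program straight up to the bucket.
// Assumes m is the pipedream.MultipartUpload configured above; listen on ch
// for pipedream.Complete or pipedream.Error events exactly as in the example.
ch := m.Send(os.Stdin, "backups/stdin-dump.rdb")
```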
The full source of this example is available in the repo. For an example with more detailed reporting, see the source code of the CLI.
Thanks to Apoorva Manjunath's S3 multipart upload example for the S3 implementation details.