Backing up to your own cloud accounts
Although Relica offers the fully-managed, highly-available Relica Cloud for reliable off-site backups, you can also use your own cloud providers or remote servers as destinations for your backups. This feature is included with your membership subscription at no extra cost.
Relica currently supports these providers and backends, each described below: Amazon S3, Backblaze B2, DigitalOcean Spaces, Google Cloud Storage, Jottacloud, pCloud, Microsoft Azure Blob Storage, SFTP, Wasabi, WebDAV, and other S3-compatible services.
When you create a custom cloud destination in Relica, you'll be asked to select the provider or type of backend, give it a name, and specify the necessary configuration and credentials.
Your cloud credentials are stored in our database so that they can be accessed by all the computers on your account and to make restores available via our web interface, but they are encrypted with your encryption password so we cannot read them.
Configuration varies from provider to provider, and some fields may be left blank. Consult your provider's documentation to determine which fields are required and what values to use.
Amazon S3
| Field Name | Description |
|------------|-------------|
| Bucket name | The name of the bucket. |
| Sub-directory | The object prefix within the bucket. (optional) |
| ACCESS_KEY_ID | The value of the AWS Access Key ID. |
| SECRET_ACCESS_KEY | The value of the AWS Secret Access Key. |
| REGION | The region to access the bucket. |
If you choose to use a bucket policy on S3, this example policy should allow Relica to work properly (make sure to customize it):
```json
{
    "Version": "2012-10-17",
    "Id": "RelicaExamplePolicy",
    "Statement": [
        {
            "Sid": "AllowObjectManagement",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::YOUR_IAM"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        },
        {
            "Sid": "AllowBucketListings",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::YOUR_IAM"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME"
            ]
        }
    ]
}
```
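Before applying the policy, the `YOUR_IAM` principal and `BUCKET_NAME` placeholders must be replaced with your IAM user's ARN and your bucket's name. A minimal, stdlib-only Python sketch of that substitution (the ARN and bucket name below are hypothetical values, not part of the Relica docs):

```python
import json

# The example policy above, with its placeholders still in place.
template = json.dumps({
    "Version": "2012-10-17",
    "Id": "RelicaExamplePolicy",
    "Statement": [
        {
            "Sid": "AllowObjectManagement",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::YOUR_IAM"},
            "Action": ["s3:DeleteObject", "s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::BUCKET_NAME/*"],
        },
        {
            "Sid": "AllowBucketListings",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::YOUR_IAM"},
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": ["arn:aws:s3:::BUCKET_NAME"],
        },
    ],
})

# Hypothetical values -- substitute your own IAM user ARN and bucket name.
iam_arn = "arn:aws:iam::123456789012:user/relica-backup"
bucket = "my-relica-backups"

policy = template.replace("arn:aws:iam::YOUR_IAM", iam_arn)
policy = policy.replace("BUCKET_NAME", bucket)

# Sanity check: still valid JSON, and no placeholders left behind.
parsed = json.loads(policy)
assert "YOUR_IAM" not in policy and "BUCKET_NAME" not in policy
print(parsed["Statement"][0]["Resource"][0])
# -> arn:aws:s3:::my-relica-backups/*
```

The resulting string can then be pasted into the bucket's policy editor in the S3 console.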
Backblaze B2
| Field Name | Description |
|------------|-------------|
| Bucket name | The name of the bucket. |
| Sub-directory | The object prefix within the bucket. (optional) |
| ACCOUNT | The account ID or application key ID. |
| KEY | The API key or application key. |
Application keys must have read+write permissions.
DigitalOcean Spaces
| Field Name | Description |
|------------|-------------|
| Space name | The name of the space. |
| Sub-directory | The object prefix within the space. (optional) |
| ACCESS_KEY_ID | The value of the Access Key ID. |
| SECRET_ACCESS_KEY | The value of the Secret Access Key. |
| ENDPOINT | The endpoint for the space's region (for example, nyc3.digitaloceanspaces.com). |
Google Cloud Storage
| Field Name | Description |
|------------|-------------|
| Bucket name | The name of the bucket. |
| Sub-directory | The object prefix within the bucket. (optional) |
| LOCATION | The location of the bucket. |
| SERVICE_ACCOUNT_CREDENTIALS | The JSON body that comprises the service account credentials. |
| STORAGE_CLASS | The desired storage class for the objects. |
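For SERVICE_ACCOUNT_CREDENTIALS, Google Cloud generates a JSON key file when you create a key for a service account; the entire JSON body is what goes in this field. Its shape is roughly the following (every value here is a placeholder):

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "PLACEHOLDER_KEY_ID",
  "private_key": "-----BEGIN PRIVATE KEY-----\nPLACEHOLDER\n-----END PRIVATE KEY-----\n",
  "client_email": "relica-backup@my-project.iam.gserviceaccount.com",
  "client_id": "1234567890",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

The service account must have permission to read, write, and list objects in the bucket.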
Jottacloud
| Field Name | Description |
|------------|-------------|
| Remote directory | The remote directory in which to store backups. |
| USER | The username. |
| PASS | The password. |
| MOUNTPOINT | The mountpoint to use; "Archive" is recommended. |
pCloud
| Field Name | Description |
|------------|-------------|
| Remote directory | The remote directory in which to store backups. |
| CLIENT_ID | Your client ID. |
| CLIENT_SECRET | Your client secret. You will need to get these credentials from an OAuth flow. |
Microsoft Azure Blob Storage
| Field Name | Description |
|------------|-------------|
| Container name | The name of the blob storage container. |
| Sub-directory | The directory in the container in which to store backups. (optional) |
| ACCOUNT | The blob storage account ID. |
| KEY | One of the blob storage account keys. |
SFTP
Note that shell expansion is not performed in these configurations (i.e. ~, environment variables, and other bash-isms are not recognized).

| Field Name | Description |
|------------|-------------|
| Remote directory | The directory on the remote machine in which to store backups. If relative, it is relative to the user's home directory. |
| HOST | The remote machine's hostname or IP address. |
| USER | The username. |
| KEY_FILE | If logging in with key authentication, the path to the key file. |
| PASS | If logging in with password authentication, the user's password. |
| PORT | The port to connect on (leave blank for the default of 22). |
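For key authentication, you can generate a dedicated key pair for Relica and point KEY_FILE at the private key. A sketch using standard OpenSSH tools (the file names and remote host below are placeholders); because shell expansion is not performed, give KEY_FILE the absolute path rather than one starting with ~:

```shell
# Generate a passphrase-less ed25519 key pair for unattended backups.
# (Add a passphrase if your workflow allows entering it.)
ssh-keygen -t ed25519 -N "" -C "relica-backup" -f ./relica_sftp_key

# Authorize the public key on the backup host (placeholder hostname),
# then set KEY_FILE to the absolute path of relica_sftp_key in Relica:
# ssh-copy-id -i ./relica_sftp_key.pub user@backup.example.com
```

Using a dedicated key makes it easy to revoke Relica's access later without affecting your other SSH logins.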
Wasabi
| Field Name | Description |
|------------|-------------|
| Bucket name | The name of the bucket. |
| Sub-directory | The object prefix with which to store objects. |
| ACCESS_KEY_ID | The IAM user Access Key ID. |
| SECRET_ACCESS_KEY | The IAM user Secret Access Key. |
| ENDPOINT | Should usually be s3.wasabisys.com. |
| REGION | The configured region. |
WebDAV
| Field Name | Description |
|------------|-------------|
| Remote directory | The directory on the remote server in which to store backups. |
| URL | The base URL at which to access the WebDAV server. |
| VENDOR | If using Nextcloud, ownCloud, or SharePoint: nextcloud, owncloud, or sharepoint, respectively. Otherwise, other. |
| USER | The username. |
| PASS | The password. |
| BEARER_TOKEN | Used in place of a username and password, if applicable. |
Other S3-compatible
| Field Name | Description |
|------------|-------------|
| Provider Name | The name of the S3-compatible service. |
| Bucket name | The name of the bucket. |
| Sub-directory | The object prefix within the bucket. (optional) |
| ACCESS_KEY_ID | The value of the IAM Access Key ID. |
| SECRET_ACCESS_KEY | The value of the IAM Secret Access Key. |
| REGION | The region to access the bucket. |
| ENDPOINT | The endpoint at which to access the service. |
| STORAGE_CLASS | The desired storage class for the objects. |