Writing your own uploader
If you're writing an app on top of Shade, this is how you upload files.
We've built the Shade uploader to be as easy as possible to integrate into your own app while maximizing upload speeds.
Multipart Uploads
We recommend always using multipart uploads. We have a single-part API, but due to S3 limitations it only supports files up to 5GB. Multipart uploads support files up to 5TB.
Steps
1. Make sure you have a valid ShadeFS token. All calls will require this.
const resp = await axios.get(`https://api.shade.inc/workspaces/drives/${driveId}/shade-fs-token`, {
headers: {
Authorization: apiKey,
},
})
This gives back a token valid for some amount of time. We recommend decoding it manually and reading the exp field to know when it expires:
// Require at least this many seconds of remaining validity before reusing a token
const MUST_BE_LONGER_THAN_TO_BE_VALID = 240
const exp = jwtDecode(tokenJWT).exp
if (!exp) {
  throw new Error("No exp attribute in decoded jwt")
}
if (exp - Date.now() / 1000 < MUST_BE_LONGER_THAN_TO_BE_VALID) {
  // Token expires too soon, fetch another one
}
A very common mistake is continuing to use a token after it has expired, especially if you redeem one up front and then take a long time to upload. We recommend a utility that checks the expiry on each usage and fetches a fresh token when needed. You'll see that here as tokenCacher.fetchToken().
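A minimal sketch of such a utility, reusing the token endpoint and expiry check shown above. The class name TokenCacher matches the usage in this guide, but the constructor arguments and the assumption that the endpoint returns the raw JWT string are illustrative, not part of the Shade API:
import axios from "axios";
import { jwtDecode } from "jwt-decode";

const MIN_REMAINING_SECONDS = 240;

export class TokenCacher {
  private cached: string | null = null;

  constructor(private apiKey: string, private driveId: string) {}

  async fetchToken(): Promise<string> {
    // Reuse the cached token while it still has enough validity left
    if (this.cached) {
      const exp = jwtDecode(this.cached).exp;
      if (exp && exp - Date.now() / 1000 > MIN_REMAINING_SECONDS) {
        return this.cached;
      }
    }
    // Otherwise redeem a fresh ShadeFS token
    const resp = await axios.get(
      `https://api.shade.inc/workspaces/drives/${this.driveId}/shade-fs-token`,
      { headers: { Authorization: this.apiKey } },
    );
    // Assumption: the endpoint returns the token string directly; adjust if it is wrapped in an object
    this.cached = resp.data as string;
    return this.cached;
  }
}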
2. Make sure that you've created the directories that you want to upload to. This call looks something like this:
await axios.post(
`https://fs.shade.inc/${driveId}/fs/mkdir`, {}, {
headers: {
Authorization: `Bearer ${(await tokenCacher.fetchToken())}`,
},
params: {
email,
path: directory,
drive: driveId,
}
})
Directories are created recursively, so if you pass /a/b/c/d.txt and only /a exists, it will create /b and /c.
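The full example at the end of this page calls a makeDirectories helper. Here is a minimal sketch of it, assuming it simply wraps the mkdir call above and passes the destination path straight through, relying on the recursive behavior just described; the helper itself is illustrative:
import axios from "axios";

// Ensure the parent directories of destPath exist server-side.
// The mkdir endpoint creates any missing intermediate directories (see note above).
export async function makeDirectories(
  tokenCacher: TokenCacher,
  driveId: string,
  email: string,
  destPath: string,
): Promise<void> {
  await axios.post(
    `https://fs.shade.inc/${driveId}/fs/mkdir`,
    {},
    {
      headers: { Authorization: `Bearer ${await tokenCacher.fetchToken()}` },
      params: { email, path: destPath, drive: driveId },
    },
  );
}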
3. Initiate the multipart upload to the destination path. First, decide on a part size; we've found 64-128 MB works well for most end users. This call looks something like this:
export type InitiateMultipartResponse = {
  uploadId: string;
  partSize: number; // Server-confirmed part size
  token: string; // Finish token used for part presign + completion
};

const PART_SIZE = 64 * 1024 * 1024; // 64MB

export async function initiateMultipartUpload(
  tokenCacher: TokenCacher,
  drive: string,
  path: string,
  PART_SIZE: number,
): Promise<InitiateMultipartResponse> {
  // Body carries the destination path and the chosen part size
  const resp = await axios.post(
    `https://fs.shade.inc/${drive}/upload/multipart`,
    { path, PART_SIZE },
    {
      headers: { Authorization: `Bearer ${await tokenCacher.fetchToken()}` },
    }
  );
  return resp.data as InitiateMultipartResponse;
}
4. Now we can work on the parts. For each part, you will need to presign it first:
export type PresignedHeaders = Record<string, string>; // Headers to include on the part PUT

export type PresignedPart = {
  url: string;
  headers?: PresignedHeaders;
};

export async function presignPart(
  tokenCacher: TokenCacher,
  drive: string,
  partNumber: number,
  finishToken: string,
): Promise<PresignedPart> {
  const resp = await axios.post(
    `https://fs.shade.inc/${drive}/upload/multipart/part/${partNumber}`,
    {},
    {
      headers: { Authorization: `Bearer ${await tokenCacher.fetchToken()}` },
      params: { token: finishToken },
    }
  );
  return resp.data as PresignedPart;
}
5. Now upload the part to the presigned URL. This call looks something like this:
// Load the chunk into memory - this is the Bun API; Node and web are slightly different.
// Loading the whole chunk at once is fine on our transfer machines, which have plenty of RAM,
// but on end users' machines we recommend streaming from disk instead - with concurrent
// part uploads, memory usage can otherwise explode.
export async function uploadPart(
  presigned: PresignedPart,
  filePath: string,
  start: number,
  endExclusive: number,
): Promise<string> {
  const buffer = await Bun.file(filePath).slice(start, endExclusive).arrayBuffer();
  const chunkLen = endExclusive - start;
  // PUT the data to the presigned URL and gather the ETag
  const resp = await axios.put(presigned.url, buffer, {
    headers: {
      "Content-Length": chunkLen.toString(),
      ...(presigned.headers ?? {}),
    },
    maxBodyLength: Infinity,
    maxContentLength: Infinity,
    validateStatus: (s) => s >= 200 && s < 300,
  });
  const etag = resp.headers["etag"] as string | undefined;
  if (!etag) throw new Error("Missing ETag on UploadPart response");
  return etag;
}
6. Now just complete the upload:
export type CompletePart = { PartNumber: number; ETag: string };

export async function completeMultipart(
  tokenCacher: TokenCacher,
  drive: string,
  finishToken: string,
  parts: CompletePart[],
): Promise<void> {
  const resp = await axios.post(
    `https://fs.shade.inc/${drive}/upload/multipart/complete`,
    { parts },
    {
      headers: { Authorization: `Bearer ${await tokenCacher.fetchToken()}` },
      params: { token: finishToken },
    }
  );
  if (resp.status >= 200 && resp.status < 300) return;
  throw new Error(`completeMultipart ${resp.status}`);
}
Example
Putting it all together, you get something like this to control your upload.
You can make it faster by parallelizing this function across files and parallelizing the part uploads; see the bounded-concurrency sketch after the example.
/*
Here is a portion of our own uploader tool, written in TypeScript using the Bun runtime.
The below script is the driver for the functions above.
*/
import { promises } from "node:fs";
import { jwtDecode } from "jwt-decode";

export async function uploadMultipart(
  tokenCacher: TokenCacher,
  localPath: string,
  destPath: string,
) {
  const token = await tokenCacher.fetchToken();
  const decoded: any = jwtDecode(token);
  const drive: string = decoded.aud;
  const userEmail: string = decoded.sub;
  if (!userEmail || !drive || Array.isArray(drive)) throw new Error("Bad token: missing sub/aud");
// Ensure destination directories exist server-side
await makeDirectories(tokenCacher, drive, userEmail, destPath);
// File info
const {size: fileSize} = await promises.stat(localPath);
const totalParts = Math.ceil(fileSize / PART_SIZE);
// 1) Initiate
const init = await initiateMultipartUpload(tokenCacher, drive, destPath, PART_SIZE);
const finishToken = init.token;
// 2) Upload each part sequentially
const completed: CompletePart[] = [];
for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
const start = (partNumber - 1) * PART_SIZE;
const endExclusive = Math.min(start + PART_SIZE, fileSize);
const presigned = await presignPart(tokenCacher, drive, partNumber, finishToken);
const etag = await uploadPart(presigned, localPath, start, endExclusive);
completed.push({PartNumber: partNumber, ETag: etag});
}
// 3) Complete
await completeMultipart(tokenCacher, drive, finishToken, completed);
}
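The loop above uploads parts sequentially, which is the easiest place to gain speed. Here is a minimal sketch of uploading parts with a bounded worker pool; the helper name uploadPartsConcurrently and the concurrency default are illustrative, not part of the Shade API, and it reuses presignPart and uploadPart from the steps above:
// Upload parts with a fixed number of concurrent workers.
// Each worker claims the next part number, presigns it, uploads it, and records the ETag.
async function uploadPartsConcurrently(
  tokenCacher: TokenCacher,
  drive: string,
  finishToken: string,
  localPath: string,
  fileSize: number,
  partSize: number,
  concurrency = 4, // illustrative default; tune for your machine and link
): Promise<CompletePart[]> {
  const totalParts = Math.ceil(fileSize / partSize);
  const completed: CompletePart[] = new Array(totalParts);
  let next = 1;

  const worker = async () => {
    while (true) {
      const partNumber = next++;
      if (partNumber > totalParts) return;
      const start = (partNumber - 1) * partSize;
      const endExclusive = Math.min(start + partSize, fileSize);
      const presigned = await presignPart(tokenCacher, drive, partNumber, finishToken);
      const etag = await uploadPart(presigned, localPath, start, endExclusive);
      completed[partNumber - 1] = { PartNumber: partNumber, ETag: etag };
    }
  };

  await Promise.all(Array.from({ length: concurrency }, worker));
  return completed;
}
Keep the memory caveat from step 5 in mind: with N concurrent workers you hold N part-sized buffers in memory at once, so on end users' machines prefer streaming parts from disk or lowering the concurrency.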