This repository is deprecated and no longer maintained. The code has been migrated to our semi-mono repository at https://github.com/lablup/backend.ai.
Backend.AI Storage Proxy is an RPC daemon that manages vfolders used by the Backend.AI Agent, with quota and storage-specific optimization support.
The package is organized as follows:

- `ai.backend.storage`
  - `server`: The agent daemon which communicates with the Backend.AI Manager
  - `api.client`: The client-facing API that handles the tus.io server-side protocol for uploads and ranged HTTP queries for downloads (see the download sketch after this list)
  - `api.manager`: The manager-facing (internal) API that provides an abstraction of volumes and separates the hardware resources for volume and file operations
  - `vfs`: The minimal fallback backend which only uses the standard Linux filesystem interfaces
  - `xfs`: XFS-optimized backend with a small daemon to manage XFS project IDs for quota limits
    - `agent`: Implementation of `AbstractVolumeAgent` with XFS support
  - `purestorage`: PureStorage FlashBlade-optimized backend with RapidFile Toolkit (formerly PureTools)
  - `netapp`: NetApp QTree integration backend based on the NetApp ONTAP REST API
  - `cephfs` (TODO): CephFS-optimized backend with quota limit support
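For illustration, a ranged download through the client-facing API could look like the following sketch. The `/download` endpoint path and the token parameter are assumptions for this sketch; the actual URL and token are issued via the Backend.AI Manager:

$ curl -H "Range: bytes=0-1048575" \
       -o chunk.bin \
       "http://127.0.0.1:6021/download?token=<issued-token>"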
- Python 3.8 or higher with pyenv and pyenv-virtualenv (optional but recommended)
First, prepare the source clone of this agent:
# git clone https://github.com/lablup/backend.ai-storage-agent
From now on, let's assume all shell commands are executed inside the virtualenv.
Now install dependencies:
# pip install -U -r requirements/dist.txt # for deployment
# pip install -U -r requirements/dev.txt # for development
Then, copy the sample configuration file to the root of the project folder and edit it to match your machine:
# cp config/sample.toml storage-proxy.toml
When done, start the storage proxy server:
# python -m ai.backend.storage.server
It will start the Storage Proxy daemon bound at `127.0.0.1:6021` (client API) and `127.0.0.1:6022` (manager API).
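To quickly check that both endpoints are listening (the ports are the defaults mentioned above):

# ss -ltn | grep -E ':(6021|6022)'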
NOTE: Depending on the backend, the server may need to be run as root.
To get performance boosts from the OS-provided `sendfile()` syscall for file transfers, SSL termination should be handled by reverse proxies such as nginx, and the storage proxy daemon itself should run without SSL.
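A minimal sketch of such a reverse-proxy setup with nginx, assuming certificates are already provisioned (the hostname, certificate paths, and conf.d layout are assumptions; adapt them to your distribution):

# cat > /etc/nginx/conf.d/storage-proxy.conf <<'EOF'
server {
    listen 443 ssl;
    server_name storage.example.com;            # assumed hostname
    ssl_certificate     /etc/ssl/storage.crt;   # assumed certificate paths
    ssl_certificate_key /etc/ssl/storage.key;
    location / {
        proxy_pass http://127.0.0.1:6021;       # plain-HTTP client API behind TLS
    }
}
EOF
# nginx -s reload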
Prerequisites for the `vfs` backend:

- User account permission to access the given directory
- Make sure the directory you want to use or mount, such as `/vfroot/vfs`, exists
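For example, preparing a local `vfs` volume directory might look like this sketch (the path and the ownership placeholders are assumptions for your environment):

# mkdir -p /vfroot/vfs
# chown <service-user>:<service-group> /vfroot/vfs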
Prerequisites for the `xfs` backend:

- Local device mounted under `/vfroot`
- Native support for the XFS filesystem
- The XFS volume mounted with the `-o pquota` option to enable project quota (see the fstab sketch after this list)
  - To turn on quotas on the root filesystem, the quota mount flags must be set with the `rootflags=` boot parameter. Usually, this is not recommended.
- Access to root privilege
  - Execution of `xfs_quota`, which performs quota-related commands, requires the `root` privilege.
  - Thus, you need to start the Storage Proxy service as the `root` user or as a user with passwordless sudo access.
  - If the root user starts the Storage Proxy, the owner of every file created is also root. In some situations, this would not be the desired setting. In that case, it might be better to start the service with a regular user with passwordless sudo privilege.
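A sketch of making the quota mount persistent across reboots; the device name below is an assumption, so replace it with your actual partition:

# echo '/dev/sdb1  /vfroot/xfs  xfs  defaults,pquota  0 0' >> /etc/fstab
# mount /vfroot/xfs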
Create a virtual block device attached to a loopback device if you are the only one using the storage, e.g., for testing:

- Create a file with your desired size:

# dd if=/dev/zero of=xfs_test.img bs=1G count=100

- Format the file as an XFS partition:

# mkfs.xfs xfs_test.img

- Attach it to a loopback device:

# export LODEVICE=$(losetup -f)
# losetup $LODEVICE xfs_test.img

- Create the mount point and mount the loopback device with the `pquota` option:

# mkdir -p /vfroot/xfs
# mount -o pquota $LODEVICE /vfroot/xfs
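You can then confirm that project quota is active on the new mount using the standard `xfs_quota` tool:

# xfs_quota -x -c 'state -p' /vfroot/xfs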
XFS keeps quota mapping information in two files: `/etc/projects` and `/etc/projid`. If they are deleted or damaged in any way, per-directory quota information will also be lost. So, it is crucial not to delete them accidentally. If possible, it is a good idea to back them up to a different disk or an NFS share.
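A simple backup sketch; the destination is a hypothetical path on a separate disk:

# mkdir -p /backup/xfs-quota
# cp -a /etc/projects /etc/projid /backup/xfs-quota/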
Prerequisites for the `purestorage` backend:

- NFSv3 export mounted under `/vfroot`
- Purity API access
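For example, mounting the FlashBlade export might look like this sketch (the appliance hostname and export name are assumptions):

# mkdir -p /vfroot/pure
# mount -t nfs -o vers=3 flashblade.example.com:/bai-vfroot /vfroot/pure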
Prerequisites for the `cephfs` backend (TODO):

- FUSE export mounted under `/vfroot`
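A sketch of such a FUSE mount using the standard ceph-fuse client, assuming a reachable monitor and a configured keyring (the monitor address is an assumption):

# mkdir -p /vfroot/cephfs
# ceph-fuse -m mon1.example.com:6789 /vfroot/cephfs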
Prerequisites for the `netapp` backend:

- NFSv3 export mounted under `/vfroot`
- NetApp ONTAP API access
- Native NetApp XCP or a Dockerized NetApp XCP container
  - To install NetApp XCP, please refer to the NetApp XCP install guide
- Create a Qtree in the volume explicitly using the NetApp ONTAP Sysmgr GUI
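As a sketch, mounting the qtree's NFSv3 export could look like this (the filer hostname, volume, and qtree names are assumptions):

# mkdir -p /vfroot/netapp
# mount -t nfs -o vers=3 filer.example.com:/bai_volume/bai_qtree /vfroot/netapp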
The volume host of Backend.AI Storage Proxy corresponds to a Qtree of NetApp ONTAP, not a NetApp ONTAP volume.
Please DO NOT remove the Backend.AI-mapped qtree in the NetApp ONTAP Sysmgr GUI; otherwise, you will not be able to access the NetApp ONTAP volume through Backend.AI.
NOTE: The Qtree name in the configuration file (`storage-proxy.toml`) must be the same as the name created in the NetApp ONTAP Sysmgr.