Upgrade RedisAI to 1.2.7 #234

Merged: 8 commits, Oct 24, 2022
Changes from 7 commits
4 changes: 3 additions & 1 deletion .github/workflows/run_tests.yml
@@ -23,11 +23,13 @@ jobs:
matrix:
os: [macos-10.15, ubuntu-20.04] # Operating systems
compiler: [8] # GNU compiler version
rai: [1.2.3, 1.2.5] # Redis AI versions
rai: [1.2.3, 1.2.5, 1.2.7] # Redis AI versions
py_v: [3.7, 3.8, 3.9] # Python versions
exclude:
- os: macos-10.15 # Do not build with Redis AI 1.2.5 on MacOS
rai: 1.2.5
- py_v: 3.7 # ONNX requires python >= 3.8
rai: 1.2.7

env:
SMARTSIM_REDISAI: ${{ matrix.rai }}
5 changes: 4 additions & 1 deletion doc/changelog.rst
@@ -23,13 +23,16 @@ This section details changes made in the development branch that have not yet be
Description

- Fix bug in colocated database entrypoint when loading PyTorch models
- Add support for RedisAI 1.2.7, pyTorch 1.11.0, Tensorflow 2.8.0, ONNXRuntime 1.11.1

Detailed Notes

- Fix bug in colocated database entrypoint stemming from uninitialized variables. This bug affects PyTorch models being loaded into the database. (PR237_)
- The release of RedisAI 1.2.7 allows us to update support for recent versions of pyTorch, Tensorflow, and ONNX (PR234_)
- Make installation of the correct Torch backend more reliable according to the instructions from PyTorch

.. _PR237: https://github.com/CrayLabs/SmartSim/pull/237

.. _PR234: https://github.com/CrayLabs/SmartSim/pull/234

0.4.1
-----
41 changes: 15 additions & 26 deletions doc/installation.rst
@@ -59,36 +59,25 @@ Supported Versions
to support Windows.



SmartSim supports multiple machine learning libraries through
the use of RedisAI_. The following libraries are supported.

.. list-table:: Supported ML Libraries
:widths: 50 50 50 50
:header-rows: 1
:align: center

* - Library
- Versions
- Python Versions
- Built By Default
* - PyTorch_
- 1.7
- 3.7 - 3.9
- Yes
* - Tensorflow_ / Keras_
- 2.5.2
- 3.7 - 3.9
- Yes
* - ONNX_
- 1.9
- 3.7 - 3.9
- No
Native support for various machine learning libraries and their
versions is dictated by our dependency on RedisAI_ 1.2.7. Users
can also select RedisAI 1.2.3 or 1.2.5 (though that also limits
the version of the ML libraries).

+------------------+----------+-------------+---------------+
| RedisAI          | PyTorch  | Tensorflow  | ONNX Runtime  |
+==================+==========+=============+===============+
| 1.2.7 (default)  | 1.11.0   | 2.8.0       | 1.11.1        |
+------------------+----------+-------------+---------------+
| 1.2.5            | 1.9.0    | 2.6.0       | 1.9.0         |
+------------------+----------+-------------+---------------+
| 1.2.3            | 1.7.0    | 2.5.2       | 1.9.0         |
+------------------+----------+-------------+---------------+
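
For illustration (not part of this diff), the selection above is driven by the SMARTSIM_REDISAI environment variable that buildenv.py reads further down in this PR; a minimal sketch, assuming SmartSim with this change applied is importable:

import os
os.environ["SMARTSIM_REDISAI"] = "1.2.7"   # the default after this PR; 1.2.3 and 1.2.5 remain valid

# Import path taken from this PR's diff; attribute access mirrors how
# Versioner uses REDISAI.torch in buildenv.py
from smartsim._core._install.buildenv import RedisAIVersion

rai = RedisAIVersion(os.environ["SMARTSIM_REDISAI"])
print(rai.torch, rai.tensorflow)           # expected: 1.11.0 2.8.0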

TensorFlow_ 2.0 and Keras_ are supported through graph freezing_.
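
For context, freezing a Keras model can be done with stock TensorFlow 2.x roughly as follows; this is a generic sketch, not SmartSim's own helper, and the toy model and output file name are illustrative:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Build a toy Keras model and wrap it in a concrete tf.function
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
full_model = tf.function(lambda x: model(x))
concrete = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)

# Fold variables into constants and serialize the frozen graph
frozen = convert_variables_to_constants_v2(concrete)
tf.io.write_graph(frozen.graph, ".", "model_frozen.pb", as_text=False)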

ScikitLearn_ and Spark_ models are supported by SmartSim as well
through the use of the ONNX_ runtime.
through the use of the ONNX_ runtime (which is not built by
default due to issues with glibc on a variety of Linux
platforms and lack of support for Mac OS X).
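
As a brief sketch of that ONNX path (assuming scikit-learn and skl2onnx are installed at the versions pinned in buildenv.py below), a scikit-learn model can be exported like this:

import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import to_onnx

# Train a small scikit-learn model on toy data
X = np.random.default_rng(0).random((20, 4), dtype=np.float32)
y = np.array([0, 1] * 10)
model = LogisticRegression().fit(X, y)

# Convert to ONNX; the input signature is inferred from the sample data
onnx_model = to_onnx(model, X)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())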

.. _Spark: https://spark.apache.org/mllib/
.. _Keras: https://keras.io
27 changes: 18 additions & 9 deletions smartsim/_core/_cli/build.py
@@ -23,22 +23,31 @@


def _install_torch_from_pip(versions, device="cpu", verbose=False):

packages = []
end_point = None
# if we are on linux cpu, use the torch without CUDA
if sys.platform == "linux" and device == "cpu":
packages.append(f"torch=={versions.TORCH}+cpu")
packages.append(f"torchvision=={versions.TORCHVISION}+cpu")

if sys.platform == "darwin":
if device == "gpu":
logger.warning("GPU support is not available on Mac OS X")
# The following is deliberately left blank as there is no
# alternative package available on Mac OS X
device_suffix = ""
end_point = None

# if we are on linux, either CUDA or CPU wheels must be installed
elif sys.platform == "linux":
end_point = "https://download.pytorch.org/whl/torch_stable.html"
if device in ["gpu","cuda"] :
device_suffix = versions.TORCH_CUDA_SUFFIX
elif device == "cpu":
device_suffix = versions.TORCH_CPU_SUFFIX

# otherwise just use the version downloaded by pip
else:
packages.append(f"torch=={versions.TORCH}")
packages.append(f"torchvision=={versions.TORCHVISION}")
packages.append(f"torch=={versions.TORCH}{device_suffix}")
packages.append(f"torchvision=={versions.TORCHVISION}{device_suffix}")

pip_install(packages, end_point=end_point, verbose=verbose)


class Build:
def __init__(self):
parser = argparse.ArgumentParser()
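To make the new suffix logic concrete, here is a small self-contained mirror of the resolution that _install_torch_from_pip performs, filled in with the RedisAI 1.2.7 defaults from buildenv.py; it only shows the resulting package specifiers and index endpoint and does not invoke pip:

import sys

def resolve_torch_packages(device="cpu",
                           torch="1.11.0", torchvision="0.12.0",
                           cpu_suffix="+cpu", cuda_suffix="+cu113"):
    """Mirror of the suffix/endpoint selection above (illustration only)."""
    device_suffix, end_point = "", None
    if sys.platform == "linux":
        end_point = "https://download.pytorch.org/whl/torch_stable.html"
        device_suffix = cuda_suffix if device in ("gpu", "cuda") else cpu_suffix
    return ([f"torch=={torch}{device_suffix}",
             f"torchvision=={torchvision}{device_suffix}"], end_point)

print(resolve_torch_packages(device="gpu"))
# On Linux: (['torch==1.11.0+cu113', 'torchvision==0.12.0+cu113'],
#            'https://download.pytorch.org/whl/torch_stable.html')
# On macOS: (['torch==1.11.0', 'torchvision==0.12.0'], None)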
39 changes: 33 additions & 6 deletions smartsim/_core/_install/buildenv.py
@@ -116,6 +116,9 @@ class RedisAIVersion(Version_):

2. Used to set the default values for PyTorch, TF, and ONNX
given the SMARTSIM_REDISAI env var set by the user.

NOTE: Torch requires additional information depending on whether
CPU or GPU support is requested
"""

defaults = {
@@ -126,6 +129,8 @@ class RedisAIVersion(Version_):
"onnxmltools": "1.10.0",
"scikit-learn": "1.0.2",
"torch": "1.7.1",
"torch_cpu_suffix": "+cpu",
"torch_cuda_suffix": "+cu110",
"torchvision": "0.8.2",
},
"1.2.5": {
@@ -135,8 +140,21 @@
"onnxmltools": "1.10.0",
"scikit-learn": "1.0.2",
"torch": "1.9.1",
"torch_cpu_suffix": "+cpu",
"torch_cuda_suffix": "+cu111",
"torchvision": "0.10.1",
},
"1.2.7": {
"tensorflow": "2.8.0",
"onnx": "1.11.0",
"skl2onnx": "1.11.1",
"onnxmltools": "1.11.1",
"scikit-learn": "1.1.1",
"torch": "1.11.0",
"torch_cpu_suffix": "+cpu",
"torch_cuda_suffix": "+cu113",
"torchvision": "0.12.0",
},
}
# deps are the same between the following versions
defaults["1.2.4"] = defaults["1.2.3"]
@@ -146,7 +164,7 @@ def __init__(self, vers):
if vers.startswith("1.2"):
# resolve to latest version for 1.2.x
# the str representation will still be 1.2.x
self.version = "1.2.5"
self.version = "1.2.7"
else:
raise SetupError(
f"Invalid RedisAI version {vers}. Options are {self.defaults.keys()}"
@@ -194,7 +212,7 @@ class Versioner:
REDIS_BRANCH = get_env("SMARTSIM_REDIS_BRANCH", REDIS)

# RedisAI
REDISAI = RedisAIVersion(get_env("SMARTSIM_REDISAI", "1.2.3"))
REDISAI = RedisAIVersion(get_env("SMARTSIM_REDISAI", "1.2.7"))
REDISAI_URL = get_env(
"SMARTSIM_REDISAI_URL", "https://github.com/RedisAI/RedisAI.git/"
)
@@ -204,6 +222,8 @@
# torch can be set by the user because we download that for them
TORCH = Version_(get_env("SMARTSIM_TORCH", REDISAI.torch))
TORCHVISION = Version_(get_env("SMARTSIM_TORCHVIS", REDISAI.torchvision))
TORCH_CPU_SUFFIX = Version_(get_env("TORCH_CPU_SUFFIX", REDISAI.torch_cpu_suffix))
TORCH_CUDA_SUFFIX = Version_(get_env("TORCH_CUDA_SUFFIX", REDISAI.torch_cuda_suffix))

# TensorFlow and ONNX only use the defaults, but these are not built into
# the RedisAI package and therefore the user is free to pick other versions.
@@ -240,12 +260,19 @@ def ml_extras_required(self):
ml_extras = []
ml_defaults = self.REDISAI.get_defaults()

# remove torch and torch vision as they will be installed
# remove torch-related fields as they will be installed
# by the cli process for use in the RAI build. We don't install
# them here as the user needs to decide between GPU/CPU. All other
# libraries work on both devices
del ml_defaults["torch"]
del ml_defaults["torchvision"]
# libraries work on both devices. The correct versions and suffixes
# were scraped from https://pytorch.org/get-started/previous-versions/
_torch_fields = [
"torch",
"torchvision",
"torch_cpu_suffix",
"torch_cuda_suffix"
]
for field in _torch_fields:
ml_defaults.pop(field)

for lib, vers in ml_defaults.items():
ml_extras.append(f"{lib}=={vers}")
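For reference, a minimal stand-alone mirror of what ml_extras_required produces for the new 1.2.7 defaults once the torch-related fields are dropped (values copied from the defaults dictionary above):

defaults_1_2_7 = {
    "tensorflow": "2.8.0",
    "onnx": "1.11.0",
    "skl2onnx": "1.11.1",
    "onnxmltools": "1.11.1",
    "scikit-learn": "1.1.1",
    "torch": "1.11.0",
    "torch_cpu_suffix": "+cpu",
    "torch_cuda_suffix": "+cu113",
    "torchvision": "0.12.0",
}
torch_fields = ["torch", "torchvision", "torch_cpu_suffix", "torch_cuda_suffix"]

# torch and friends are installed separately by the CLI (CPU vs GPU choice),
# so only the remaining libraries become pinned extras
ml_extras = [f"{lib}=={vers}" for lib, vers in defaults_1_2_7.items()
             if lib not in torch_fields]
print(ml_extras)
# ['tensorflow==2.8.0', 'onnx==1.11.0', 'skl2onnx==1.11.1',
#  'onnxmltools==1.11.1', 'scikit-learn==1.1.1']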