4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -42,11 +42,11 @@ jobs:
- name: Install dependencies
run: |
pip install -e .
-pip install pytest==6.1.1 torchvision scikit-learn gorilla transformers torchtext matplotlib captum
+pip install pytest==7.2.2 torchvision scikit-learn gorilla transformers torchtext matplotlib captum

- name: Install pytorch lightning
run: |
-pip install pytorch-lightning>=1.2.3
+pip install lightning>=2.0 jsonargparse[signatures]

- name: Add permissions for remove conda utility file
run: |
22 changes: 16 additions & 6 deletions examples/BertNewsClassification/README.md
@@ -20,7 +20,9 @@ Ensure to install the `mlflow-torchserve` [prerequisites](https://github.com/mlf

Install the required packages using the following command

-`pip install -r requirements.txt`
+```
+pip install -r requirements.txt
+```


### Running the code
@@ -43,7 +45,7 @@ where `X` is your desired value for `max_epochs`.
If you have the required modules for the file and would like to skip the creation of a conda environment, add the argument `--no-conda`.

```
-mlflow run . --no-conda
+mlflow run . --env-manager local

```

@@ -93,22 +95,28 @@ torchrun news_classifier.py \

Create an empty directory `model_store` and run the following command to start TorchServe.

-`torchserve --start --model-store model_store`
+```
+torchserve --start --model-store model_store
+```
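
TorchServe can also read these settings from a `config.properties` file in the working directory. A minimal sketch is below; the addresses shown are TorchServe's defaults, and the exact keys should be checked against the TorchServe configuration documentation for your version:

```
model_store=model_store
inference_address=http://127.0.0.1:8080
management_address=http://127.0.0.1:8081
```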


## Creating a new deployment

Run the following command to create a new deployment named `news_classification_test`

-`mlflow deployments create -t torchserve -m file:///home/ubuntu/mlflow-torchserve/examples/BertNewsClassification/models --name news_classification_test -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py"`
+```
+mlflow deployments create -t torchserve -m file:///home/ubuntu/mlflow-torchserve/examples/BertNewsClassification/models --name news_classification_test -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py"
+```

The TorchServe plugin derives the version number from the deployment name, so a version number
is not a mandatory argument for the plugin. For example, the above command will create a deployment `news_classification_test` with version 1.

If needed, the version number can also be specified explicitly as a config variable.


-`mlflow deployments create -t torchserve -m file:///home/ubuntu/mlflow-torchserve/examples/BertNewsClassification/models --name news_classification_test -C "VERSION=1.0" -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py"`
+```
+mlflow deployments create -t torchserve -m file:///home/ubuntu/mlflow-torchserve/examples/BertNewsClassification/models --name news_classification_test -C "VERSION=1.0" -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py"
+```

Note:
The mlflow-torchserve plugin generates the `.mar` file inside the `model_store` directory. If the `model_store` directory is not present under the current folder,
@@ -120,4 +128,6 @@ if the torchserve is already running with a different "model_store" location, en

The deployed BERT model predicts the class of the given news text and stores the output in `output.json`. Run the following command to invoke prediction on our sample input (`input.json`).

-`mlflow deployments predict --name news_classification_test --target torchserve --input-path input.json --output-path output.json`
+```
+mlflow deployments predict --name news_classification_test --target torchserve --input-path input.json --output-path output.json
+```
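
The exact layout of `output.json` depends on the handler. Assuming it contains a JSON-encoded prediction (the handler in this example returns the predicted class), a small sketch for loading the result might look like this:

```python
import json

def read_prediction(path):
    """Load a prediction written by `mlflow deployments predict`.

    Assumes the output file contains JSON; the precise shape of the
    payload is determined by the deployed handler.
    """
    with open(path) as f:
        return json.load(f)

# Hypothetical usage once output.json has been written:
# print(read_prediction("output.json"))
```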
22 changes: 10 additions & 12 deletions examples/E2EBert/MLproject
@@ -5,18 +5,16 @@ conda_env: conda.yaml
entry_points:
main:
parameters:
-      max_epochs: {type: int, default: 5}
-      devices: {type: int, default: None}
-      num_samples: {type: int, default: 1000}
-      vocab_file: {type: str, default: 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt'}
-      strategy: {type str, default: None}
-      accelerator: {type str, default: None}
+      max_epochs: {type: int, default: 1}
+      devices: {type: int, default: "auto"}
+      num_samples: {type: int, default: 2000}
+      strategy: {type: str, default: "auto"}
+      accelerator: {type: str, default: "auto"}

command: |
python news_classifier.py \
-        --max_epochs {max_epochs} \
-        --devices {devices} \
-        --num_samples {num_samples} \
-        --vocab_file {vocab_file} \
-        --strategy={strategy} \
-        --accelerator={accelerator}
+        --trainer.max_epochs {max_epochs} \
+        --trainer.devices {devices} \
+        --data.num_samples {num_samples} \
+        --trainer.strategy={strategy} \
+        --trainer.accelerator={accelerator}
28 changes: 20 additions & 8 deletions examples/E2EBert/README.md
@@ -17,7 +17,9 @@ Ensure to install the `mlflow-torchserve` [prerequisites](https://github.com/mlf

Install the required packages using the following command

-`pip install -r requirements.txt`
+```
+pip install -r requirements.txt
+```


### Running the code
@@ -40,7 +42,7 @@ where `X` is your desired value for `max_epochs`.
If you have the required modules for the file and would like to skip the creation of a conda environment, add the argument `--no-conda`.

```
-mlflow run . --no-conda
+mlflow run . --env-manager local

```

@@ -74,27 +76,33 @@ Or to run the training script directly with custom parameters:

```
python news_classifier.py \
-    --max_epochs 5
+    --trainer.max_epochs 5
```

## Starting TorchServe

Create an empty directory `model_store` and run the following command to start TorchServe.

-`torchserve --start --model-store model_store`
+```
+torchserve --start --model-store model_store
+```

## Creating a new deployment

Run the following command to create a new deployment named `news_classification_test`

-`mlflow deployments create -t torchserve -m state_dict.pth --name news_classification_test -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py" -C "EXTRA_FILES=class_mapping.json,bert_base_uncased_vocab.txt,wrapper.py"`
+```
+mlflow deployments create -t torchserve -m state_dict.pth --name news_classification_test -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py" -C "EXTRA_FILES=class_mapping.json,bert_base_uncased_vocab.txt,wrapper.py"
+```

Note: the TorchServe plugin derives the version number from the deployment name, so a version number
is not a mandatory argument for the plugin. For example, the above command will create a deployment `news_classification_test` with version 1.

If needed, the version number can also be specified explicitly as a config variable.

-`mlflow deployments create -t torchserve -m state_dict.pth --name news_classification_test -C "VERSION=1.0" -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py" -C "EXTRA_FILES=class_mapping.json,bert_base_uncased_vocab.txt,wrapper.py"`
+```
+mlflow deployments create -t torchserve -m state_dict.pth --name news_classification_test -C "VERSION=1.0" -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py" -C "EXTRA_FILES=class_mapping.json,bert_base_uncased_vocab.txt,wrapper.py"
+```

Note:

@@ -110,11 +118,15 @@ For testing the fine-tuned model, a sample input text is placed in `input.json`

Run the following command to invoke prediction on our sample input

-`mlflow deployments predict --name news_classification_test --target torchserve --input-path input.json --output-path output.json`
+```
+mlflow deployments predict --name news_classification_test --target torchserve --input-path input.json --output-path output.json
+```

Run the following command to invoke the explain API on our sample input


-`mlflow deployments explain --name news_classification_test --target torchserve --input-path input.json --output-path output.json`
+```
+mlflow deployments explain --name news_classification_test --target torchserve --input-path input.json --output-path output.json
+```

All the Captum Insights visualizations can be seen in the Jupyter notebook added in this example.
10 changes: 6 additions & 4 deletions examples/E2EBert/conda.yaml
@@ -10,8 +10,10 @@ dependencies:
- captum
- sklearn
- transformers
-  - torch>=1.9.0
-  - torchtext>=0.10.0
-  - pytorch-lightning>=1.7.6
-  - torchvision>=0.10.0
+  - portalocker
+  - torch>=2.0
+  - torchtext>=0.15.1
+  - lightning>=2.0
+  - torchvision>=0.15.1
+  - jsonargparse[signatures]>=4.17.0
- torchdata
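
The `>=` pins above follow standard minimum-version semantics: any installed version at or above the stated one satisfies the pin. A naive, illustrative check of that rule (real resolvers such as pip and conda implement the full specifier grammar, including pre-release and local version tags):

```python
def satisfies_min_pin(installed, pin):
    """Check a simple '>=X.Y' pin against an installed version string.

    Naive sketch: compares dotted numeric components only and ignores
    pre-release/local tags, which PEP 440-compliant resolvers handle.
    """
    assert pin.startswith(">=")
    def numeric_key(version):
        return tuple(int(part) for part in version.split(".") if part.isdigit())
    return numeric_key(installed) >= numeric_key(pin[2:])

# e.g. satisfies_min_pin("2.0.1", ">=2.0")     -> True
# e.g. satisfies_min_pin("0.14.0", ">=0.15.1") -> False
```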