Conversation

@openvino-dev-samples (Collaborator) posted:

[image]

@review-notebook-app: Check out this pull request on ReviewNB to see visual diffs and provide feedback on the Jupyter Notebooks.

@openvino-dev-samples changed the title from "Add Fireredtts3 notebook" to "Add Fireredtts2 notebook" on Nov 14, 2025.
Copilot AI left a comment:

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@aleksandr-mokrov (Collaborator) commented on the notebook on Nov 27, 2025:

Line #13.    convert_fireredtts2(pt_model_path, model_path)

It returns:

---------------------------------------------------------------------------
UnpicklingError                           Traceback (most recent call last)
Cell In[3], line 13
     10     get_ipython().system('git clone https://huggingface.co/FireRedTeam/FireRedTTS2 pretrained_models')
     12 model_path = "FireRedTTS2-ov"
---> 13 convert_fireredtts2(pt_model_path, model_path)

File ~/test_notebooks/fireredtts/openvino_notebooks/notebooks/fireredtts2/ov_fireredtts_helper.py:730, in convert_fireredtts2(model_id, model_path, quantization_config)
    728 print(f"⌛ {model_id} conversion started. Be patient, it may takes some time.")
    729 print("⌛ Load Original model")
--> 730 pt_model = FireRedTTS2(
    731     pretrained_dir=model_id,
    732     gen_type="dialogue",
    733     device="cpu",
    734 )
    736 print("✅ Original model successfully loaded")
    737 print("⌛ Export tokenizer and config")

File ~/test_notebooks/fireredtts/openvino_notebooks/notebooks/fireredtts2/FireRedTTS2/fireredtts2/fireredtts2.py:42, in FireRedTTS2.__init__(self, pretrained_dir, gen_type, device, use_bf16)
     40 # ==== Load Torch LLM ====
     41 llm_config = json.load(open(llm_config_path))
---> 42 self._model = load_llm_model(
     43     configs=llm_config, checkpoint_path=llm_ckpt_path, device=device
     44 )
     45 if use_bf16:
     46     if torch.cuda.is_bf16_supported():

File ~/test_notebooks/fireredtts/openvino_notebooks/notebooks/fireredtts2/FireRedTTS2/fireredtts2/llm/utils.py:436, in load_llm_model(configs, checkpoint_path, device)
    434 print(f'{checkpoint_path=}')
    435 if checkpoint_path and os.path.exists(checkpoint_path):
--> 436     state_dict = torch.load(
    437         checkpoint_path, map_location="cpu", weights_only=False
    438     )["model"]
    439     model.load_state_dict(state_dict)
    440 else:

File ~/test_notebooks/fireredtts/openvino_notebooks/venv/lib/python3.10/site-packages/torch/serialization.py:1549, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
   1547     except pickle.UnpicklingError as e:
   1548         raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
-> 1549 return _legacy_load(
   1550     opened_file, map_location, pickle_module, **pickle_load_args
   1551 )

File ~/test_notebooks/fireredtts/openvino_notebooks/venv/lib/python3.10/site-packages/torch/serialization.py:1797, in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
   1794         # if not a tarfile, reset file offset and proceed
   1795         f.seek(0)
-> 1797 magic_number = pickle_module.load(f, **pickle_load_args)
   1798 if magic_number != MAGIC_NUMBER:
   1799     raise RuntimeError("Invalid magic number; corrupt file?")

UnpicklingError: invalid load key, 'v'.

git clone doesn't download llm_posttrain.pt and llm_pretrain.pt (most likely they are stored with Git LFS, so a plain clone fetches only pointer files, which is why torch.load fails with UnpicklingError). Use snapshot_download from huggingface_hub instead.
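
A minimal sketch of that fix, reusing the repo id and target directory from the notebook's existing clone step:

from huggingface_hub import snapshot_download

# Fetches every file in the repo, including the large *.pt checkpoints that a
# plain `git clone` without Git LFS leaves as small pointer files.
pt_model_path = snapshot_download(
    repo_id="FireRedTeam/FireRedTTS2",
    local_dir="pretrained_models",
)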


@sbalandi (Collaborator) commented on Nov 28, 2025:

I got the following error during generation of AUDIO_UPSAMPLER:

OpConversionFailure: Check 'is_conversion_successful' failed at src/frontends/pytorch/src/frontend.cpp:181:
FrontEnd API failed with OpConversionFailure:
Model wasn't fully converted. Failed operations detailed log:
-- aten::cat with a message:
Exception happened during conversion of operation aten::cat with schema aten::cat(Tensor[] tensors, int dim=0) -> Tensor
Check 'is_axis_valid(axis, r)' failed at src/core/src/validation_util.cpp:332:
While validating node 'opset1::Concat Concat_155437 (opset1::Convert aten::to/Convert[0]:f32[0], util::PtFrameworkNode prim::TupleUnpack[1]:f32[?,?,?,?]) -> (dynamic[...])' with friendly_name 'Concat_155437':
Axis -2 out of the tensor rank range [-1, 0].

-- prim::ListConstruct with a message:
Exception happened during conversion of operation prim::ListConstruct with schema (no schema)
Check '(c_node)' failed at src/frontends/pytorch/src/op/list_construct.cpp:25:
FrontEnd API failed with OpConversionFailure:
[PyTorch Frontend] Translation for prim::ListConstruct support only constant inputs

Summary:
-- normalize step failed with: Exception from src/core/src/pass/graph_rewrite.cpp:298:
[ov::frontend::pytorch::pass::AtenCatToConcat] END: node: util::PtFrameworkNode aten::cat (util::PtFrameworkNode prim::ListConstruct[0]:dynamic[...], opset1::Constant 38[0]:i64[]) -> (f32[?,?,?,?]) CALLBACK HAS THROWN: Check 'is_axis_valid(axis, r)' failed at src/core/src/validation_util.cpp:332:
While validating node 'opset1::Concat Concat_160443 (opset1::Convert aten::to/Convert[0]:f32[0], util::PtFrameworkNode prim::TupleUnpack[0]:f32[?,?,?,?]) -> (dynamic[...])' with friendly_name 'Concat_160443':
Axis -2 out of the tensor rank range [-1, 0].

-- No conversion rule found for operations: prim::TupleConstruct, prim::TupleUnpack
-- Conversion is failed for: aten::cat, prim::ListConstruct

Should it work with openvino 2025.3?

@openvino-dev-samples (Author) replied:

I just tested it again, and it works with openvino==2025.3.0
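
For anyone hitting the same failure, a quick sanity check of the installed build (the author reports the conversion succeeds with openvino==2025.3.0):

import openvino as ov

# Print the installed OpenVINO version to compare against the working 2025.3.0 build.
print(ov.__version__)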

@aleksandr-mokrov (Collaborator) commented on the notebook on Nov 27, 2025:

Line #3.    device = device_widget("CPU", ["NPU"])

Please pass exclude=["NPU"] explicitly.
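
A minimal sketch of the suggested change, assuming device_widget comes from the notebooks' notebook_utils helper and that exclude is its keyword for hiding devices:

from notebook_utils import device_widget

# Default to CPU and explicitly exclude NPU from the selectable devices.
device = device_widget("CPU", exclude=["NPU"])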


A Collaborator commented on the SYMBOLS_MAPPING definition:

Can it be imported from the FireRedTTS2 repo instead?

@openvino-dev-samples (Author) replied:

Yes, it could, but I prefer to keep everything needed for inference in one place, so users can build their demos with helper.py alone.

@sbalandi (Collaborator) commented on the notebook on Nov 28, 2025:

Could you please add a description of the converted models? What is the purpose of each sub-model within the overall pipeline? Are there any specifics to be aware of when converting them to IR?

I mean TEXT/AUDIO EMBEDDINGS, AUDIO_DECODER, AUDIO_UPSAMPLER, AUDIO_ENCODER, DECODER_MODEL, BACKBONE_MODEL.


@openvino-dev-samples (Author) replied:

Thanks for the comment, and I will update it later.
