Conversation
#[deprecated(since = "1.0.0-alpha", note = "use `Tx` instead")]
pub type MintedTx<'b> = Tx<'b>;
fn is_multiasset_small_enough<T>(ma: &Multiasset<T>) -> bool {
this depends on protocol parameters, which seem to be hard-coded here and which we don't have access to during deserialization;
This might be a rule we have to enforce at a higher layer than here in pallas
I don't think it depends on protocol parameters? This code just computes an upper bound on the size of the "compact" representation of the multiasset: https://github.com/IntersectMBO/cardano-ledger/blob/9a4c7474582860c992b2135106a88ea30e6dd6ec/eras/mary/impl/src/Cardano/Ledger/Mary/Value.hs#L698-L708.
oh hm; this caught me off-guard, because 44 is the lovelace-per-byte value in the current protocol parameters, so my brain thought this was checking the minUTxO constraint;
Definitely worth some heavier comments on why this is here; in particular, we should know why it's bad if "any offset within a MultiAsset compact representation is likely to overflow Word16" and document that (here, and likely a good candidate for an article on cardano-blueprint)
The reason is not really stated explicitly in the comments in the Value.hs module, but I think it's safe to assume that the size constraint is there because it ensures the validity of the compact representation: it contains arrays of offsets which point to policy id and asset name strings in the buffers at the end of the structure. These offsets are 16 bits, so if the structure is too big they are not representable.
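To make that concrete, here is a rough sketch of the check as I understand it (hypothetical function name and illustrative constants, not the actual pallas or cardano-ledger code): compute an upper bound on the size of the compact representation and reject anything whose 16-bit offsets could overflow.

```rust
// Sketch only: the per-entry byte costs below are illustrative, not the real
// constants from cardano-ledger's Value.hs.
fn fits_in_compact_representation(policies: usize, assets: usize, name_bytes: usize) -> bool {
    // Rough upper bound on the compact representation size:
    // fixed header + per-asset entries + policy ids + asset name bytes.
    let upper_bound = 8 + assets * 12 + policies * 28 + name_bytes;
    // Offsets into the trailing buffers are Word16, so the whole structure
    // must remain addressable with 16 bits.
    upper_bound <= u16::MAX as usize
}
```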
Thinking about it, this is pretty subtle/surprising; i.e. since the ledger doesn't have this constraint, you could in theory have a ledger state that is totally valid in memory and that only fails once you try to serialize/deserialize things 🤔 It just feels weird that this isn't covered by the maxUTxO size or something.
certificates = d.decode_with(ctx)?;
},
5 => {
let real_withdrawals: BTreeMap<RewardAccount, Coin> = d.decode_with(ctx)?;
I think we should almost definitely make a non-empty-set BTree wrapper to encapsulate this, because it'll show up a lot
There is already a NonEmptyKeyValuePairs but the guard in its decoder is commented out... Not sure whether we should uncomment the guard or create a whole new type
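For illustration, a minimal sketch (hypothetical `NonEmptyMap` name, not the existing NonEmptyKeyValuePairs) of what enforcing such a guard at decode time could look like with minicbor:

```rust
use std::collections::BTreeMap;

// Hypothetical wrapper, for illustration only.
pub struct NonEmptyMap<K: Ord, V>(BTreeMap<K, V>);

impl<'b, C, K, V> minicbor::Decode<'b, C> for NonEmptyMap<K, V>
where
    K: Ord + minicbor::Decode<'b, C>,
    V: minicbor::Decode<'b, C>,
{
    fn decode(
        d: &mut minicbor::Decoder<'b>,
        ctx: &mut C,
    ) -> Result<Self, minicbor::decode::Error> {
        let inner: BTreeMap<K, V> = d.decode_with(ctx)?;
        if inner.is_empty() {
            return Err(minicbor::decode::Error::message("expected a non-empty map"));
        }
        // Note: decoding straight into a BTreeMap silently drops duplicate
        // keys, so a duplicate-key check would still have to happen elsewhere.
        Ok(NonEmptyMap(inner))
    }
}
```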
The complexity of this function scares me a lot. Yes, I would prefer having some intermediate representation (such as a key/value) that doesn't require ad-hoc decoding but also supports the flexibility to check for duplicates.
The problem with using a BTreeMap / HashMap in minicbor is that it has to be homogeneous, meaning: a single value type.
You can get around the restriction by defining a new enum TxBodyField to hold any of the known Tx body values.
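A sketch of that workaround, with placeholder type aliases and variant names standing in for the real pallas types:

```rust
use std::collections::BTreeMap;

// Placeholders standing in for the real pallas types.
type RewardAccount = Vec<u8>;
type Coin = u64;

// One homogeneous value type covering every known tx body map entry, so the
// heterogeneous CBOR map can be collected as (key, TxBodyField) pairs,
// checked for duplicates, and only then assembled into the final body.
enum TxBodyField {
    Fee(Coin),
    Ttl(u64),
    Withdrawals(BTreeMap<RewardAccount, Coin>),
    // ...one variant per known tx body key
}
```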
Note: I believe this to be a nicer API for writing such decoders: https://github.com/pragma-org/amaru/blob/main/crates/amaru-minicbor-extra/src/decode.rs#L74-L103
}
}
if let Some(map_count) = map_init {
There's a .is_some_and helper that might clean up this code a bit
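For instance (illustrative values only, not the actual decoder logic), the suggested refactor is roughly:

```rust
fn main() {
    let map_init: Option<u64> = Some(3);
    let expected = 3;

    // if-let form:
    let matches = if let Some(map_count) = map_init {
        map_count == expected
    } else {
        false
    };

    // equivalent with the std helper (stable since Rust 1.70):
    let also_matches = map_init.is_some_and(|map_count| map_count == expected);

    assert_eq!(matches, also_matches);
}
```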
Value,
U8 | U16 | U32 | U64 => Coin,
(coin, multi => Multiasset)
impl<C> minicbor::Encode<C> for Value {
I'm probably missing something.
Why do we favor manual decoding here over the macro?
What's the benefit?
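For context, here is a hedged sketch (hypothetical `SketchValue` type, not the actual pallas implementation) of what a hand-written decoder can do that the macro-generated one cannot: peek at the CBOR datatype and attach extra ledger-conformance checks to each branch.

```rust
use minicbor::data::Type;

// Hypothetical stand-in for the real Value type.
enum SketchValue {
    Coin(u64),
    Multiasset(u64), // real code would also carry the multiasset map
}

impl<'b, C> minicbor::Decode<'b, C> for SketchValue {
    fn decode(
        d: &mut minicbor::Decoder<'b>,
        _ctx: &mut C,
    ) -> Result<Self, minicbor::decode::Error> {
        match d.datatype()? {
            Type::U8 | Type::U16 | Type::U32 | Type::U64 => Ok(SketchValue::Coin(d.u64()?)),
            Type::Array => {
                let _len = d.array()?;
                let coin = d.u64()?;
                // ...decode the multiasset here and run any extra checks
                // (non-empty, size bound, sorted keys, etc.)...
                Ok(SketchValue::Multiasset(coin))
            }
            _ => Err(minicbor::decode::Error::message("unexpected datatype for Value")),
        }
    }
}
```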
TBH, I'm against interleaving high-level validation logic inside the CBOR decoding. It makes sense on a block-producing node, but there are other scenarios that require more flexibility. Having said that, and given the lack of a better alternative for implementing these validations, I'm ok with merging this PR once approved. In the long term, I would much rather prefer a solution that relies on having a custom decoding context (the generic type parameter `C`).
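For reference, a minimal sketch of what the context-driven approach might look like (hypothetical `DecodeOpts` type and flag, not an existing pallas API):

```rust
// The context value threaded through `decode_with` could carry a switch for
// the strict, ledger-conformant checks, so permissive consumers can opt out.
pub struct DecodeOpts {
    pub enforce_ledger_rules: bool,
}

pub struct Withdrawals; // placeholder for the real type

impl<'b> minicbor::Decode<'b, DecodeOpts> for Withdrawals {
    fn decode(
        d: &mut minicbor::Decoder<'b>,
        ctx: &mut DecodeOpts,
    ) -> Result<Self, minicbor::decode::Error> {
        let len = d.map()?; // size of a definite-length map, if known
        if ctx.enforce_ledger_rules && len == Some(0) {
            return Err(minicbor::decode::Error::message("withdrawals must not be empty"));
        }
        // ...decode the entries...
        Ok(Withdrawals)
    }
}
```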
@scarmuega would you like us to take a stab at the context approach instead and see if that's terribly difficult?
This implements a few changes to the codecs of various Conway types in an attempt to make them pedantically conformant to the behavior of cardano-ledger, mainly to help prevent discrepancies in behavior between cardano-node and amaru. This PR is meant to be a small chunk of work that can be reviewed and discussed to make sure we're on the right track, rather than to capture all of the behavior of every type all at once.
These tests are based on notes I took when comparing the pallas-primitives codecs with the cardano-ledger codecs. I have uploaded those notes here. (Ideally we would probably have an equivalent suite of tests in cardano-ledger.)