Implement structure encoding #31

Closed
wants to merge 9 commits into from

Conversation

@mojoX911
Contributor

mojoX911 commented Aug 1, 2021

This PR implements our own Encoding/Decoding using bitcoin::consensus::encode::Encodable traits.

I decided to go with our own custom traits because that gives us flexibility, and we need encoding for things that aren't covered by rust-bitcoin (like Signature, PublicKey, etc.).
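For reference, here is a minimal sketch (not code from this PR) of the consensus byte encoding rust-bitcoin already provides for the types it does cover; our traits extend the same style of byte encoding to the types it doesn't:

// Minimal sketch, assuming rust-bitcoin's consensus encode/decode helpers.
use bitcoin::consensus::encode::{deserialize, serialize, Error};
use bitcoin::OutPoint;

fn consensus_roundtrip() -> Result<(), Error> {
    let outpoint = OutPoint::null();
    // Consensus byte encoding, the same format bitcoin nodes speak on the wire.
    let bytes: Vec<u8> = serialize(&outpoint);
    // ...and back again.
    let decoded: OutPoint = deserialize(&bytes)?;
    assert_eq!(outpoint, decoded);
    Ok(())
}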

The commits look bulky, but it's mostly a straightforward change.

Related #19

There is a typo in the function name serialisable; this is intentional, to avoid a conflict with serde's serialize, and it will be fixed once we move away from serde completely.

@codecov-commenter

codecov-commenter commented Aug 1, 2021

Codecov Report

Merging #31 (46d5098) into master (171a386) will increase coverage by 1.18%.
The diff coverage is 91.53%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #31      +/-   ##
==========================================
+ Coverage   78.71%   79.89%   +1.18%     
==========================================
  Files           8        9       +1     
  Lines        2720     3263     +543     
==========================================
+ Hits         2141     2607     +466     
- Misses        579      656      +77     
Impacted Files Coverage Δ
src/main.rs 42.90% <ø> (-0.46%) ⬇️
src/serialization.rs 71.26% <71.26%> (ø)
src/messages.rs 95.58% <95.20%> (+8.67%) ⬆️
src/taker_protocol.rs 91.83% <0.00%> (-3.09%) ⬇️
src/contracts.rs 83.67% <0.00%> (-2.60%) ⬇️
src/wallet_sync.rs 74.06% <0.00%> (-0.62%) ⬇️
src/offerbook_sync.rs 82.60% <0.00%> (-0.49%) ⬇️
src/maker_protocol.rs 78.93% <0.00%> (-0.11%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 171a386...46d5098. Read the comment docs.

@GeneFerneau
Contributor

There is a typo in the function name serialisable; this is intentional, to avoid a conflict with serde's serialize, and it will be fixed once we move away from serde completely.

What is the purpose of moving away from serde? It's a pretty common crate for serialization across not only rust-bitcoin crates, but the entire Rust ecosystem. What gains do you see in moving away from it?

I decided to go with our own custom traits because that gives us flexibility, and we need encoding for things that aren't covered by rust-bitcoin (like Signature, PublicKey, etc.).

rust-bitcoin and rust-secp256k1 have serialization impls for nearly everything you have in src/serialization.rs. Are the impls just convenience tools over the implementations provided upstream?

What extra flexibility is provided by the custom impls?

@mojoX911
Contributor Author

mojoX911 commented Aug 6, 2021

Thanks @GeneFerneau for the review.

As far as I understand, the serde implementations in rust-bitcoin are meant for human-readable representations of structured data, like JSON or YAML. This is useful when writing wallets that display and store data for the user. It also requires the user of the lib to implement their own Serializer or use an existing serializer like serde_json.

The consensus en/decode, on the other hand, is provided to produce a byte-array representation of structures following bitcoin's consensus rules. It is used mostly in network communication, which is what we are aiming for here.

We are already using serde for everything; we don't need rust-bitcoin's serde impls to do it again. The intent of this PR is to provide byte encoding of message structures following bitcoin's consensus encoding (there might be other, more efficient byte encodings out there, but mostly I have seen other projects use bitcoin's encoding).

The reason we want a separate implementation is that bitcoin's encoding doesn't provide all the primitives, probably because they are not much used in bitcoin network messages. But we need those primitives in our messages, and we also need to extend the encoding to derived structures (like Vec&lt;Signature&gt;). Rust won't allow us to implement foreign traits on foreign types, hence the new implementation.
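To illustrate the orphan-rule point, here is a minimal sketch with purely hypothetical names (ByteEncode is not the PR's actual trait): a local trait can be implemented for foreign types like secp256k1's Signature, and for containers of them, whereas implementing bitcoin's Encodable for those types would be rejected by the compiler.

use std::io;
use bitcoin::secp256k1::Signature;

// Local trait, so the orphan rule doesn't prevent impls for foreign types.
pub trait ByteEncode {
    fn byte_encode<W: io::Write>(&self, writer: &mut W) -> io::Result<usize>;
}

impl ByteEncode for Signature {
    fn byte_encode<W: io::Write>(&self, writer: &mut W) -> io::Result<usize> {
        // 64-byte compact encoding from rust-secp256k1.
        let bytes = self.serialize_compact();
        writer.write_all(&bytes)?;
        Ok(bytes.len())
    }
}

// The same trait extends naturally to derived structures such as Vec<Signature>.
// (A real implementation would likely also write a length prefix for the count.)
impl<T: ByteEncode> ByteEncode for Vec<T> {
    fn byte_encode<W: io::Write>(&self, writer: &mut W) -> io::Result<usize> {
        let mut written = 0;
        for item in self {
            written += item.byte_encode(writer)?;
        }
        Ok(written)
    }
}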

Regarding the name: I don't think there is any need to have serde serialization derived for the message structures if we are going for bitcoin's encoding, so we should be able to use "serialize" for everything eventually.

The entire purpose of having a custom byte serialization is to have only one ser/deserialize function that covers everything in the library. Otherwise you have to keep track of which structures are serializable by which method, because Rust won't allow foreign traits to be used flexibly, for dependency-coherence reasons.

So the wrong name is transitory; if it's bothering too much, we can use explicit declarations everywhere to remove the conflict.

@GeneFerneau
Contributor

The entire purpose of having a custom byte serialization is to have only one ser/deserialize function that covers everything in the library.

That makes sense. Now I understand better having local trait(s) to cover everything in the crate, instead of using the stuff already present in rust-bitcoin and rust-secp256k1.

So the wrong name is transitory; if it's bothering too much, we can use explicit declarations everywhere to remove the conflict.

If changing the way everything in the crate is serialized/encoded, doing it in one go makes more sense to me than using an intentionally misspelled function name. "The most permanent things are temporary solutions..." - someone on the internet probably

I don't really understand your reasoning on the misspelled function: if using local traits for serialization, there should be no conflicts. Am I missing something?

The intent of this PR is to provide byte encoding of message structures following bitcoin's consensus encoding

This seems reasonable for stuff that needs to follow consensus (inclusion in blocks), but for communication rounds something like CBOR probably fits better.

To distinguish between the two, you could add a net_serialize function to Serializable for over-the-wire encoding, and use serialize to mean "consensus encode". What do you think?

@mojoX911
Contributor Author

mojoX911 commented Aug 12, 2021

I don't really understand your reasoning on the misspelled function: if using local traits for serialization, there should be no conflicts. Am I missing something?

There will still be conflicts if the local trait's function name matches some external function name; in this case Serializable::serialize and serde's Serialize::serialize conflict. There are the following options to fix this:

  • We could remove serde serialisation in this PR itself, but that would make it very hard to review.
  • We could use explicit names everywhere, but then we would need to change them back once the conflict is resolved.
  • Or we can use a different name that doesn't conflict.

Feel free to suggest one and I will update the PR.

This seems reasonable for stuff that needs to follow consensus (inclusion in blocks), but for communication rounds something like CBOR probably fits better.

The term "consensus" here is not about validation (inclusion in blocks and so on). Validation doesn't need encoding; that's done with deserialized data. The term refers to network communication: the serialization format that other bitcoin nodes can understand too, i.e., the format that is in "consensus" with the current bitcoin network. Bad encoding of correct data is also a "consensus failure" in the network. :) (Yeah, the terminology gets weird in Bitcoin.)

We will be sending around tx and other data through our connections, and if we for some reason need to broadcast a transaction in the bitcoin p2p network, we need it to be serialized as per "bitcoin consensus". That's the job of consensus_encode().

So the rationale here is: if we are using byte serialization, it's better for a bitcoin app to follow the bitcoin protocol, and avoid future interoperability issues with regular bitcoin nodes.

@GeneFerneau
Contributor

GeneFerneau commented Aug 13, 2021

We will be sending around tx and other data through our connections, and if we for some reason need to broadcast a transaction in the bitcoin p2p network, we need it to be serialized as per "bitcoin consensus". That's the job of consensus_encode().

I understand that, and that is what I meant: use net_serialize for places where consensus_encode needs to be called.

Feel free to suggest one and I will update the PR.

Since we're not importing upstream implementations of serde::Serialize, why can't the local impls you have in this PR just use that trait? Then, for the stuff that needs to use consensus_encode, something like:

pub trait NetSerialize {
    fn net_serialize(...) {
        // ...
        consensus_encode(...);
    }

    fn net_deserialize(...) {
        // ...
        consensus_decode(...);
    }
}

@mojoX911
Contributor Author

mojoX911 commented Aug 16, 2021

Since we're not importing upstream implementations of serde::Serialize,

We are using serde::Serialize here:

#[derive(Debug, Serialize, Deserialize)]

Which I didn't remove in this PR; that would be done in a subsequent one if required. This causes the name conflict.

Ya net-serialize sounds good to me.

Updated with new name.

@GeneFerneau
Contributor

GeneFerneau commented Aug 16, 2021

Which I didn't remove in this PR; that would be done in a subsequent one if required. This causes the name conflict.

I would recommend doing your custom serde impl, and removing the automatic derive that causes the conflict. Unless the NetSerialize impls take care of all the conflicts.

@GeneFerneau
Contributor

Overall, changes look great! I've done another quick review on b05b838.

A couple nits, but the code looks really solid overall.

@mojoX911
Contributor Author

mojoX911 commented Aug 17, 2021

Which I didn't remove in this PR; that would be done in a subsequent one if required. This causes the name conflict.

I would recommend doing your custom serde impl, and removing the automatic derive that causes the conflict. Unless the NetSerialize impls take care of all the conflicts.

Ya, if we just remove the derive macros, the name conflicts will be gone. But then we would also need to replace all the serde calls inside the protocol. I plan on doing that after this PR is merged, subject to review by @chris-belcher.

I could have done it here as well, but that would create a change set that I feel is too big for a single PR.

@mojoX911
Contributor Author

I am also wondering whether it's better to have the net-serialization impls in serialization.rs rather than in messages.rs itself, from a code organisation perspective.

Is it better to put encoding impls in a single module or in each structure's own module? @GeneFerneau @chris-belcher let me know if you have any thoughts/prefs.

I have seen both of them used in different projects.

@GeneFerneau
Contributor

GeneFerneau commented Aug 17, 2021

Is it better to put encoding impls in a single module or in each structure's own module?

IMHO, having all serialization trait definitions in serialization.rs makes sense, with the impls in the same module as the struct definition (e.g. all your impls in messages.rs). Of course, will defer to @chris-belcher on this, though.

@chris-belcher
Contributor

Not sure about the concept of the PR now that I just read about serde_cbor. Could we use that instead? What are the upsides/downsides

@mojoX911
Contributor Author

Not sure about the concept of the PR now that I just read about serde_cbor. Could we use that instead? What are the upsides/downsides

We can. Then we have to consider the following situations

  • Our encoding will diverge from bitcoin's network encoding. If we decide to send our transactions and other data to some other apps in the future, there would be interoperability issues: they would all have to understand CBOR. In practice, most apps I have encountered use bitcoin's network encoding; that's kind of the de facto standard.
  • We will need to implement CBOR for all bitcoin primitive types. Here we get most of it for free.

In principle they are both similar and solve our purpose. My only worry is future interoperability.

@mojoX911
Contributor Author

mojoX911 commented Aug 21, 2021

Rebased and Updated with nit comments.

@GeneFerneau
Contributor

In principle they are both similar and solve our purpose. My only worry is future interoperability.

Both points you raise make sense to me. The main reason I brought up CBOR was for use-cases where JSON was being used for serialization, not consensus encoding. I agree for consensus encoding uses, it is more reasonable to use the de-facto standard (consensus_{de,en}code).

If there are other cases where JSON serialization is used, I would advocate for CBOR, especially if the bytes are going over the network.

@chris-belcher
Contributor

Our encoding will diverge from bitcoin's network encoding. If we decide to send our transactions and other data to some other apps in the future, there would be interoperability issues

JoinMarket has the same taker/maker setup and in its several years of existence there's never been a need for other apps to implement the taker/maker protocol. I don't think we'll ever end up in a situation where other apps are talking the teleport protocol between makers and takers. Interoperability is not an issue at all since it's overwhelmingly likely that this app will only ever talk to other instances of itself.

We will need to implement CBOR for all bitcoin primitive types. Here we get most of it for free.

Is it possible to write something like #[derive(Debug, Serialize, Deserialize)] which does it all automatically? That's how it's done with serde.

@GeneFerneau
Contributor

Is it possible to write something like #[derive(Debug, Serialize, Deserialize)] which does it all automatically? That's how it's done with serde.

Looks like serde_cbor works the same way, and uses serde's Serialize/Deserialize traits: https://docs.rs/serde_cbor/0.11.2/serde_cbor/index.html#type-based-serialization-and-deserialization
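For illustration, a minimal sketch of that type-based flow with a hypothetical struct (assuming serde_cbor's to_vec/from_slice as described in the linked docs):

use serde::{Deserialize, Serialize};

// Hypothetical struct, just to show the derive-based flow.
#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct DummyOffer {
    absolute_fee: u64,
    max_size: u64,
}

fn cbor_roundtrip() -> Result<(), serde_cbor::Error> {
    let offer = DummyOffer { absolute_fee: 1000, max_size: 200 };
    // Same Serialize/Deserialize derives as serde_json, different backend.
    let bytes = serde_cbor::to_vec(&offer)?;
    let decoded: DummyOffer = serde_cbor::from_slice(&bytes)?;
    assert_eq!(offer, decoded);
    Ok(())
}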

@mojoX911
Contributor Author

Interoperability is not an issue at all since it's overwhelmingly likely that this app will only ever talk to other instances of itself.

In that case we can use whatever encoding we want, and it's better to not worry about interoperability and optimize for other aspects.

Is it possible to write something like #[derive(Debug, Serialize, Deserialize)] which does it all automatically? That's how it's done with serde.

I am not sure. My understanding is that serde::json is already implemented in rust-bitcoin, which is why we could use the derive macros here, but serde::cbor is not. I am not too familiar with how serde works, but my guess is we would need to have some serde::cbor impls for primitive bitcoin types.

I will try to see if direct derivation works for cbor.

@GeneFerneau
Contributor

I am not too familiar with how serde works, but my guess is we would need to have some serde::cbor impls for primitive bitcoin types.

AFAICT, the Serialize/Deserialize traits are general across all serde implementations. So when you actually implement reading/writing values, you would use something like serde_cbor::to_vec(&some_struct_that_impls_Serialize) instead of the serde_json equivalent. Similarly, for reading values you would use serde_cbor::Value.

@mojoX911
Contributor Author

mojoX911 commented Sep 1, 2021

Thanks @GeneFerneau , that seems very convenient. I will try that and see if it works.

I agree that it's better to use a simple existing encoding than to define our own if we don't need to follow any specific protocol.

Let me know if you have already tried it. Also happy to review a PR if you did.

@mojoX911
Contributor Author

mojoX911 commented Sep 5, 2021

I have been playing around with CBOR (both serde_cbor and other alternatives). So far they can't seem to handle our object serializations, and I am not sure why.

Here's a demonstration

    #[test]
    fn test2() {
        let expected_pubkey = PublicKey::from_str(
            "03bf98c86c3d536136378cf43ac42861ece609de87f5a44e19b730e8e9bd791938",
        )
        .unwrap();

        let x = MakerToTakerMessage::Offer(Offer {
            absolute_fee: 1000,
            amount_relative_fee: 0.005,
            max_size: 200,
            min_size: 10000,
            tweakable_point: expected_pubkey,
        });

        let mut ciborum_encoded = Vec::new();
        ciborium::ser::into_writer(&x, &mut ciborum_encoded).unwrap();

        let serde_encoded = serde_cbor::to_vec(&x).unwrap();

        let message1 : MakerToTakerMessage = ciborium::de::from_reader(&ciborum_encoded[..]).unwrap();
        let message2 : MakerToTakerMessage = serde_cbor::from_slice(&serde_encoded).unwrap();
    }

I am using two different impls of CBOR (serde_cbor and ciborium). In both cases, the last two unwraps result in errors.

serde_cbor error:

running 1 test
thread 'offerbook_sync::test::test2' panicked at 'called `Result::unwrap()` on an `Err` value: ErrorImpl { code: Message("invalid value: byte array, expected an ASCII hex string"), offset: 0 }', src/offerbook_sync.rs:185:85
stack backtrace:

ciborium error:

running 1 test
thread 'offerbook_sync::test::test2' panicked at 'called `Result::unwrap()` on an `Err` value: Semantic(None, "invalid type: u128, expected any value")', src/offerbook_sync.rs:184:94
stack backtrace:

@GeneFerneau
Contributor

I am using two different impls of CBOR (serde_cbor and ciborium). In both cases, the last two unwraps result in errors.

Just reproduced the errors locally; not sure what the best thing to do about it is. Wasn't aware that serde_cbor had locked their repo.

IMHO, I think the best thing to do is keep the consensus_{de,en}code methods for now, and revisit the CBOR stuff when the libraries are more mature.

I can dig into the errors as time permits, since it seems like an internal lib error (more likely), or we are using the lib wrong (less likely). Thanks for trying to get this working.

@mojoX911
Contributor Author

mojoX911 commented Sep 9, 2021

Yeah, makes sense. It seems the CBOR deps aren't mature enough to handle our data, especially large byte numbers.

I will finalize the hand-encoding approach; for now that seems like the only option.

@mojoX911
Contributor Author

mojoX911 commented Sep 12, 2021

Rebased and fixed the failing tests.

I am not sure yet, but one thing I am expecting to face while integrating byte encoding into the networking modules is how to separate messages without a delimiter. For JSON we are using \n, which works fine, but this will not work for byte encoding, as \n's byte value can occur within the message data.

Any suggestions or refs on how to handle such a situation would be helpful. Thanks.

Update

Some refs on the above: https://stackoverflow.com/questions/13974228/how-to-place-a-delimiter-in-a-networkstream-byte-array

And yes, it doesn't seem like a trivial problem.

One option is to have our own stream decoder like they do in rust-bitcoin: https://github.com/rust-bitcoin/rust-bitcoin/blob/master/src/network/stream_reader.rs

Hmm, it's getting more complicated than I thought for byte encodings.

@GeneFerneau
Contributor

Any suggestions or refs on how to handle such a situation would be helpful. Thanks.

As mentioned in the StackOverflow post you linked, length-prefix encoding is pretty useful for network encoding. A number of places in Bitcoin use minimally encoded integers for the length value, followed by that length of data. If I'm not wrong, this is what consensus_encode is doing.

@mojoX911
Contributor Author

Thanks @GeneFerneau for the suggestion. I tried doing something like that: put a VarInt in front of the message indicating how long it is, then fill a buffer of that specific size and try to decode the message.

For that I modified the send and read functions as below

async fn send_message(
    socket_writer: &mut WriteHalf<'_>,
    message: &MakerToTakerMessage,
) -> Result<(), Error> {
    let mut message_bytes = vec![];
    
    let len = message.net_serialize(&mut message_bytes)?;
    let var_len = VarInt(len as u64);
    
    let mut result = vec![];
    var_len.consensus_encode(&mut result).map_err(|e| Error::Serialisation(e.into()))?;
    
    result.extend_from_slice(&message_bytes);

    socket_writer.write_all(&result).await?;
    Ok(())
}

async fn read_message(reader: &mut BufReader<ReadHalf<'_>>) -> Result<TakerToMakerMessage, Error> {
    let len = read_varint(reader).await?;

    let mut buff = Vec::<u8>::with_capacity(len.0 as usize);
    reader.read_exact(&mut buff).await?;

    let message = TakerToMakerMessage::net_deserialize(&buff[..])?;

    Ok(message)
}

Along with it, a custom VarInt reading function (because VarInt's consensus_decode() is non-async, so it won't wait until the read buffer is filled):

async fn read_varint(reader: &mut BufReader<ReadHalf<'_>>) -> Result<VarInt, Error> {
    let n = reader.read_u8().await?;

    match n {
        0xFF => {
            let x = reader.read_u64().await?;
            if x < 0x100000000 {
                Err(self::Error::Protocol("Bad VarInt"))
            } else {
                Ok(VarInt(x))
            }
        }
        0xFE => {
            let x = reader.read_u32().await?;
            if x < 0x10000 {
                Err(self::Error::Protocol("Bad VarInt"))
            } else {
                Ok(VarInt(x as u64))
            }
        }
        0xFD => {
            let x = reader.read_u16().await?;
            if x < 0xFD {
                Err(self::Error::Protocol("Bad VarInt"))
            } else {
                Ok(VarInt(x as u64))
            }
        }
        n => Ok(VarInt(n as u64))
    }
}

This is the generic read/send scheme used by both the maker and taker protocols, along with the offerbook_sync() offer downloads.

But so far I am being hit by an UnexpectedEof error on both the maker and taker side.
taker error

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Serialisation(ConsensusEcode(Io(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })))buff data: []
', src/offerbook_sync.rs:82:86thread '
tokio-runtime-workernote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
' panicked at 'called `Result::unwrap()` on an `Err` value: Serialisation(ConsensusEcode(Io(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })))', src/offerbook_sync.rs:82:86

maker error

error reading from socket: Serialisation(ConsensusEcode(Io(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })))

So far it doesn't seem to be straightforward either. I suppose it's mainly because we are using async stuff, and the read_exact() method in the read_message() function is hitting EOF when it should just keep on reading.

@GeneFerneau
Contributor

GeneFerneau commented Sep 27, 2021

I tried doing something like that

Do you have a separate branch with those changes? I tried reproducing the errors you list, but every test passes locally with your latest changes.

I suppose it's mainly because we are using async stuff, and the read_exact() method in the read_message() function is hitting EOF when it should just keep on reading

Not sure what the right answer is here, since a sync function is being called within an async reader. May help to use the read_line function instead. If you point me to a branch with your changes, I can pull them down and help debug.

Edit: maybe this section of tokio docs helps? https://tokio.rs/tokio/tutorial/io#handling-eof

Looks like maker_protocol::run and taker_protocol::read_message handle reading bytes off the wire using async, and convert to *Message types. read_line will read bytes off the wire until a newline, or the socket is closed (EOF). https://docs.rs/tokio/1.12.0/tokio/io/trait.AsyncBufReadExt.html#method.read_line

Maybe try s/read_exact/read_line in read_message, similar to current master.

@mojoX911
Contributor Author

mojoX911 commented Sep 30, 2021

Maybe try s/read_exact/read_line in read_message, similar to current master.

read_line won't work because we can't use a newline delimiter in a byte-encoded message. \n is just 0xA as a byte, and we can't guarantee that 0xA won't exist inside the message itself, so trying to use read_line will prematurely end the read whenever it finds a 0xA.

So the approach was to use a VarInt instead of a delimiter, so that we can specify upfront how much data to read.

I have pushed these changes to my branch here: https://github.com/mojoX911/teleport-transactions/tree/encoding-trial

It's rough, and I commented out the previous read methods, but this should explain what I tried to do.

Thanks, any suggestions on this would be very helpful.

@GeneFerneau
Contributor

I have pushed these changes to my branch here: https://github.com/mojoX911/teleport-transactions/tree/encoding-trial

It's rough, and I commented out the previous read methods, but this should explain what I tried to do.

Thanks for posting your changes.

I pulled them down, and I think I've found the bug: you need to set the length (not just the capacity) of the buffer you pass to read_exact, since read_exact only fills buf.len() bytes and Vec::with_capacity leaves the length at zero.

Something like:

// either
let mut buff: Vec<u8> = Vec::with_capacity(len.0 as usize);
buff.resize(len.0 as usize, 0);
// or
let mut buff = vec![0; len.0 as usize];
reader.read_exact(&mut buff).await?;

Think there is still a hang somewhere, but that fixes the EOF issue. Make sure to size the buffer in the maker and taker read_message.

@mojoX911
Contributor Author

mojoX911 commented Oct 10, 2021

@GeneFerneau Ah, thanks a lot. Yes, that seems like the thing I was missing. It took me some time to get back to this; I will give it a try and report back.

Commit messages:

  • We need our own custom serialization (`NetSerialization`) trait, as bitcoin's consensus encoding cannot be extended directly to our modified coinswap structures. The trait still depends heavily on bitcoin's `consensus_encode`, as we mostly use it to derive the primitive serializations.
  • The custom serialization trait is implemented for all message variants.
  • A new `TakerToMakerMessage::TestMessage` message is created for internal testing messages (like killing a maker process from integration tests); this is required for the message encoding unit tests.
  • Unit test cases are added for message encoding/decoding roundtrips.
  • Replace the existing json text-encoding communications with byte-encoded send and receive functions for the maker protocol. VarInt size demarcation is used to specify the length of the message on the wire.
  • Same as the maker protocol, use the new byte encoding instead of json text for the taker protocol; VarInt is used for message size specification on the wire.
  • Offer book sync internal logic is modified to use byte-encoded messages instead of json encoding.
  • Finally, add `serialization.rs` as a crate module, and update the integration test's kill switch.
@mojoX911
Contributor Author

mojoX911 commented Nov 2, 2021

I have finally managed to get to the bottom of the various byte encoding issues, and it seems everything is now working at a satisfactory level. I am seeing a little lag in the standard coinswap process, though; that could be something internal to the read methods. Will need to investigate more.

So far basic coinswap is working with byte serialization, and this PR is ready for another round of review. I have restructured the commits to make review easier, but it's still a lot of code as a whole.

I was thinking about breaking it up into multiple PRs, but that might cause more headache than help. Right now it's one complete change that implements byte encoding and shifts all the network-level communication over to it.

Some utility-method refactoring could be done, but I would like to do that in separate, smaller PRs. This one is already big enough in scope.

Thanks @GeneFerneau for helping out with the suggestions.

@mojoX911
Contributor Author

mojoX911 commented Nov 2, 2021

It seems the codecov compilation is failing for some reason, probably a grcov upstream issue, nothing related to this PR. https://github.com/bitcoin-teleport/teleport-transactions/runs/4082340098?check_suite_focus=true#step:10:20

I am investigating further.

Update

This should be fixed after #41

chris-belcher force-pushed the master branch 2 times, most recently from 5c77b31 to 46e29e5 on May 11, 2022 09:55
mojoX911 closed this by deleting the head repository May 30, 2024