Public decentral markets with privacy for traders #2887
ToDo: system architecture figure + initial system design that links the Tunnel community, the orderbook in Python, relaying, spam prevention, etc.
Please document the DDoS problem (100 Mbps of attack traffic costs $5/month), and the problem of DDoS against Tor-based orderbook relays. Prototype: go back to the 2013 code; proxies in the network route your traffic. There is no Chaum remixing or onion crypto, so it is trivial to match traffic by sniffing.
Related work: Bitsquare (https://bitsquare.io). They seem to use Tor together with the Bitcoin mainnet.
The current idea to prevent bid/ask spam is to use either a cybercurrency or TrustChain (a reputation-based solution). Another option is to combine this with network latency, as documented in #2541. Build a fresh new community within Dispersy which builds a low-latency overlay with network neighbors. With each peer you see within this community you do a ping/pong handshake to determine the network latency. A random walk across the network does not converge fast; you only randomly stumble upon close, low-latency peers. A small bias will dramatically boost the speed at which you can find 10 close peers in a group of 10 million peers. For instance, with a 50% coin toss you introduce either a random peer or one of your top-10 closest peers (see the sketch below). Due to the triangulation effect this boosts convergence.

The next step is to build low-latency proxies. These tunnels are then fast and restricted to a certain region. This addresses our problem, as spam is now also restricted to a certain region. The final policy to prevent spam is to combine latency with tradechain reputation: you need both low latency and sufficient reputation to be inserted into an orderbook. Peers with a bad-latency connection need to compensate for this and build up a higher reputation before they can start trading.

ToDo: incrementally improve the current code. Get a 1-hop proxy operational. Add the low-latency bias.

The current fee in Bitcoin does not enable microtransactions for bids/asks: it is $4 per KByte for 97.2% of blocks. Thus the best approach is to align all the incentives: positive reinforcement within the ecosystem, where traders with a good trade history get all the help they want and traders without this history have an incentive to behave positively. How do we solve the bootstrap problem of traders with zero reputation on their traderchain? For instance, you need to help others and relay orders to build up your reputation.
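A minimal sketch of the biased peer-introduction step described above. The 50% coin toss and the top-10 cutoff come from the comment; the `Peer` object and everything else are illustrative assumptions, not the actual Dispersy API:

```python
import random
from dataclasses import dataclass
from typing import Optional

TOP_N = 10  # number of closest peers kept as introduction candidates


@dataclass
class Peer:
    address: str
    latency: Optional[float] = None  # seconds, None if not yet measured


def select_introduction(known_peers):
    """With a 50% coin toss return either a plain random peer or one of the
    top-10 lowest-latency peers, biasing the walk towards nearby peers."""
    if not known_peers:
        return None
    measured = sorted(
        (p for p in known_peers if p.latency is not None),
        key=lambda p: p.latency,
    )
    if measured and random.random() < 0.5:
        return random.choice(measured[:TOP_N])  # biased: a close peer
    return random.choice(known_peers)           # unbiased: plain random walk
```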
Ongoing coding work on the latency community, proxies, etc.:
Professional trading needs to be low-latency, private, and DDoS-proof.
ToDo: incremental progress. Deploy the latency community with one extra message, get_recent_seen_latencies(), which returns the last end-to-end response times seen at the Dispersy community level, with the last 8(?) digits of each IPv4 address obfuscated. Use only a single UDP packet for this gossip reply. Next: crawl latencies within a Gumby experiment.
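A minimal sketch of what such a gossip reply could look like. The partial IPv4 obfuscation and the single-UDP-packet budget come from the ToDo above; the serialization format, payload limit, and entry cap are illustrative assumptions:

```python
import json

MAX_UDP_PAYLOAD = 1400  # stay within a single, unfragmented UDP packet (assumption)
MAX_ENTRIES = 25        # cap on the number of reported latencies (assumption)


def obfuscate_ip(ip):
    """Hide the last part of a dotted IPv4 address, e.g. 130.161.7.3 -> 130.161.x.x."""
    parts = ip.split(".")
    return ".".join(parts[:2] + ["x", "x"])


def get_recent_seen_latencies(recent_latencies):
    """Build the gossip reply payload from (ip, latency_seconds) pairs,
    most recent last, truncated so the reply fits in one UDP packet."""
    entries = [(obfuscate_ip(ip), round(latency, 4))
               for ip, latency in recent_latencies[-MAX_ENTRIES:]]
    payload = json.dumps(entries).encode("utf-8")
    while len(payload) > MAX_UDP_PAYLOAD and entries:
        entries.pop(0)  # drop the oldest entry until the reply fits
        payload = json.dumps(entries).encode("utf-8")
    return payload
```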
Clear target: build the lowest-latency overlay. Within two months: experiments finished.
Nice progress! Next steps:
Thesis-level Gumby experiment:
Upon an introduction request: predict what the latency to the requester would be.
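A minimal sketch of such a prediction, assuming each peer already has a network coordinate (for example from the GNP-style estimation discussed later in this thread); the coordinate dimension and values are illustrative:

```python
import math


def predict_latency(own_coordinate, requester_coordinate):
    """Estimate the latency to the requester as the Euclidean distance
    between the two network coordinates (same unit as the coordinates)."""
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(own_coordinate, requester_coordinate)))


# Example: 2-dimensional coordinates expressed in seconds.
print(predict_latency((0.010, 0.004), (0.013, 0.000)))  # -> 0.005
```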
A prime example of a low-latency network, a Bitcoin enhancement: http://bitcoinfibre.org/stats.html
Current status: created a Dispersy latency community, but now moved into Dispersy itself. This implementation runs on DAS5, can measure node-to-node ping times, and gossips these results. Using the collected ping times, various existing network-distance algorithms, such as GNP, can place nodes in a coordinate space (see the sketch below). Key challenge: instead of re-calculating the whole world state every 5 seconds, we can update coordinates incrementally.
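A minimal sketch of GNP-style coordinate estimation for one node, assuming a handful of landmark peers with known coordinates and measured ping times. It uses scipy's Nelder-Mead minimizer (the Simplex Downhill method used in the original GNP paper); the landmark values below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize


def fit_coordinate(landmark_coords, measured_latencies, dim=2):
    """Place one node in a dim-dimensional space so that its Euclidean
    distance to each landmark matches the measured latency (GNP-style)."""
    landmark_coords = np.asarray(landmark_coords, dtype=float)
    measured = np.asarray(measured_latencies, dtype=float)

    def squared_error(candidate):
        predicted = np.linalg.norm(landmark_coords - candidate, axis=1)
        return np.sum((predicted - measured) ** 2)

    result = minimize(squared_error, x0=np.zeros(dim), method="Nelder-Mead")
    return result.x


# Example: three landmarks (coordinates in ms) and measured ping times to them.
landmarks = [(0.0, 0.0), (50.0, 0.0), (0.0, 40.0)]
pings = [30.0, 35.0, 25.0]
print(fit_coordinate(landmarks, pings))
```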
Golden experiments:
Idea: do real ICMP requests to measure ping times without NAT puncturing.
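A minimal sketch of that idea, shelling out to the system ping command (raw ICMP sockets would need root privileges). The output parsing assumes Linux-style ping summary lines, which is an assumption:

```python
import re
import subprocess


def icmp_ping(host, count=3, timeout=2):
    """Return the average ICMP round-trip time to host in milliseconds,
    or None if the host did not answer. No NAT puncturing is needed
    because ICMP echo requests are routed like ordinary IP traffic."""
    try:
        output = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout), host],
            capture_output=True, text=True, check=True,
        ).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None
    # Linux ping summary line: "rtt min/avg/max/mdev = 9.1/10.2/11.3/0.8 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", output)
    return float(match.group(1)) if match else None


print(icmp_ping("8.8.8.8"))
```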
Master thesis link: https://www.sharelatex.com/project/592c19a601647e1979114c42

Literature topics: centralized algorithms, triangle inequality violation, dynamic clustering, latency measurements in P2P systems.

Thought about an incremental algorithm that recalculates the coordinates of a new peer plus its neighbors upon introduction (see the sketch below). Under normal conditions these are around 10 coordinates; with a fast walker around 30 coordinates are recalculated. A maximum number of coordinates for recalculation can be set. The coordinates take their new position based on the latencies of their neighbors. Thus when a new peer is introduced, its measured latencies plus all the latencies measured by its neighbors should be sent with the message. Peer introduction happens on:

Idea on deleting "old" latencies: delete "old" measured latencies after 10 walker steps; with a fast walker, latencies are deleted after 30 walker steps. In this way the system stays responsive to changing latencies and to nodes leaving the system.

Idea on latency measurements: do multiple latency measurements and average them to get a better estimate and to suppress outliers. Latency can vary due to temporary computations that block a node. If some measured latencies appear to be outliers, they can be deleted.

Idea on metrics: use the ranking metric as described in the GNP literature. Also use relative error as a new error function.

Project planning:
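A minimal sketch of the incremental recalculation described above, assuming a dictionary of coordinates and a dictionary of measured latencies. The Vivaldi-like spring relaxation used to move coordinates is an illustrative choice, not necessarily the algorithm the thesis settles on:

```python
import numpy as np

MAX_RECALCULATED = 30  # upper bound on coordinates touched per introduction
STEP = 0.25            # how far a coordinate moves towards its ideal position


def incremental_update(coords, latencies, new_peer, neighbours):
    """On introduction of new_peer, recalculate only its coordinate and those
    of its neighbours (about 10, at most MAX_RECALCULATED) instead of the
    whole world state. `coords` maps peer -> np.array, `latencies` maps
    (peer_a, peer_b) -> measured latency."""
    touched = [new_peer] + list(neighbours)[:MAX_RECALCULATED - 1]
    coords.setdefault(new_peer, np.zeros(2))
    for peer in touched:
        for other in touched:
            rtt = latencies.get((peer, other)) or latencies.get((other, peer))
            if other == peer or rtt is None:
                continue
            direction = coords[peer] - coords[other]
            distance = np.linalg.norm(direction) or 1e-9
            error = distance - rtt  # positive: the pair sits too far apart in the map
            # Spring relaxation: move a fraction of the error along the direction.
            coords[peer] -= STEP * error * direction / distance
    return coords
```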
System model:
Status: the thesis has its first experiments. Ready for experiments with incremental updates and runtime measurements: the X-axis shows the number of known latency pairs, the Y-axis depicts the runtime in ms of a network-coordinate update, possibly with different curves for accuracy settings.
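A minimal sketch of how such a runtime measurement could be collected, assuming an `update_coordinates(pairs)` function such as the incremental one sketched earlier; the synthetic pair generation and sizes are placeholders:

```python
import random
import time


def measure_runtime(update_coordinates, pair_counts, repeats=5):
    """For each number of known latency pairs, time the coordinate update
    and return (pair_count, average runtime in ms) tuples for plotting."""
    results = []
    for n in pair_counts:
        # Synthetic latency pairs: (peer_a, peer_b, latency in seconds).
        pairs = [(i, random.randrange(n), random.uniform(0.005, 0.2))
                 for i in range(n)]
        start = time.perf_counter()
        for _ in range(repeats):
            update_coordinates(pairs)
        elapsed_ms = (time.perf_counter() - start) * 1000 / repeats
        results.append((n, elapsed_ms))
    return results
```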
Status: have a working incremental model. Next steps: run experiments and tweak the current model.
Latency sharing opens the possibility of reporting false latencies and of delaying messages. Possible solutions give some, but not full, protection. Currently writing the report.
Dataset: Cornell-King 2500 x 2500 node latency matrix, https://www.cs.cornell.edu/people/egs/meridian/data.php. Current thesis status: chapter focus fixed. Next step: a solid experiment, focused on the core, exploring the trade-off between accuracy and computational time; write 1-3 pages in already-polished thesis style.
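A minimal sketch of how that dataset could feed the relative-error metric mentioned earlier, assuming the matrix is distributed as a whitespace-separated 2500 x 2500 text file (the actual file layout should be checked against the Meridian page):

```python
import numpy as np


def load_latency_matrix(path):
    """Load the 2500 x 2500 pairwise latency matrix; negative or zero
    entries are treated as missing measurements."""
    matrix = np.loadtxt(path)
    return np.where(matrix > 0, matrix, np.nan)


def relative_error(predicted, measured):
    """Relative error per pair: |predicted - measured| / measured,
    ignoring missing measurements; lower is better."""
    mask = ~np.isnan(measured)
    return np.abs(predicted[mask] - measured[mask]) / measured[mask]


# Usage: compare coordinate-based predictions against the King measurements.
# measured = load_latency_matrix("king_matrix.txt")
# errors = relative_error(predicted_matrix, measured)
# print(np.median(errors))
```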
Current status: experiments one, two, and three have run. Proposed next steps: add more settings, run experiment four, and run experiments with the decentralized market.
Delft_University_of_Technology_Thesis_and_Report.pdf
Status: experiments three and four done. Code is cleaned up and ready to deploy. Proposal: continue with writing.
ToDo: first problem-description chapter, covering privacy and trading plus related work, the state of the art, and incremental algorithms.
Current status: Next steps:
Quick comments:
I think the title of this issue is outdated (the focus of this thesis has changed over time)? |
Thesis progress:
please fix: "In the default setting in the low latency overlay latency information
Currently implemented:
Proof of running code experiment:
Thnx for the thesis update! Getting a 100% working system, thanks to a good predictive dataset?
Completed: final master thesis report
Financial markets offer significant privacy to trading firms.
Leakage of market positions and trade history hands competitors an advantage.
So traders will only operate on decentral markets if their privacy is protected. Regulators obviously have more access.
Builds upon: #2559