What is Bitcoin? What is Blockchain? What is ECDSA? – SinavBOX

Quantum computing question

I'm thinking about Bitcoin in the long run and how safe my investments are, etc and I have a question about the quantum computing vulnerability. All I really understand is that, with quantum computing, it may be relatively easy to get the private keys to a wallet address. If this problem occurs, Bitcoin could of course hard fork to address the issue, however, wouldn't it still be too late? Even if the community nearly 100% agreed to fork from a time in the past before the first attack occurred, how could a fork possibly allow for new private keys that somehow everyone would be able to know based on their current private keys without those being compromised as well? And asking every single Bitcoin holder to move their funds immediately after a fork seems unfeasible. Isn't the time to fork for quantum resistance now, before a successful attack occurs? Can anyone explain to me definitively how my Bitcoins are protected if Quatum computers become widely available?
submitted by tballz16 to Bitcoin [link] [comments]

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.
  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.

https://preview.redd.it/s2gmpmeze3151.png?width=256&format=png&auto=webp&s=9759910dd3c4a15b83f55b827d1899fb2fdd3de1

1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, whom are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures supporting the development of decentralized applications (Ethereum), the Bitcoin protocol is primarily used only for payments, and has only very limited support for smart contract-like functionalities (Bitcoin “Script” is mostly used to create certain conditions before bitcoins are used to be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like cash payment between two parties: Alice gives money to Bob and receives change (i.e., unspent amount). In comparison, blockchains like Ethereum rely on the account model.
https://preview.redd.it/t1j6anf8f3151.png?width=1601&format=png&auto=webp&s=33bd141d8f2136a6f32739c8cdc7aae2e04cbc47

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability of any single validator to finish the task first is equal to the percentage of the total network computation power, or hash power, the validator has. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, though each block might contain thousands of transactions, it will have the ability to combine all of their hashes and condense them into one, allowing efficient and secure verification of this group of transactions. This single hash called is a Merkle root, which is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as a hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name “blockchain”).
An illustration of block production in the Bitcoin Protocol is demonstrated below.

https://preview.redd.it/m6texxicf3151.png?width=1591&format=png&auto=webp&s=f4253304912ed8370948b9c524e08fef28f1c78d

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producingorphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one’s messages are received by all other nodes earlier as the node has low latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty, and therefore targeted block time is low, miners with powerful and often centralized mining facilities would get a higher chance of becoming the block producer, while the participation of weaker miners would become in vain. This introduces possible centralization and weakens the overall security of the network.
However, given a limited amount of transactions that can be stored in a block, making the block time too longwould decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
https://preview.redd.it/depya70mf3151.png?width=1601&format=png&auto=webp&s=a6499aa2131fbf347f8ffd812930b2f7d66be48e
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change on the block size would require a network hard-fork. On August 1st 2017, the first hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a (faulty) hash value of 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
  2. Bob sends Carols this 1 BTC, while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC to him, and immediately ships goods to B.
  3. At the moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive his 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances can not happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature won’t affect the transaction ID.
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
https://preview.redd.it/cy56icarf3151.png?width=1601&format=png&auto=webp&s=b239a63c6a87ec6cc1b18ce2cbd0355f8831c3a8
One limitation to the Lightning Network is that it requires a person to be online to receive transactions attributing towards him. Another limitation in user experience could be that one needs to lock up some funds every time he wishes to open a payment channel, and is only able to use that fund within the channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
https://preview.redd.it/hjeqe4l7g3151.png?width=1601&format=png&auto=webp&s=8014fb08fe62ac4d91645499bc0c7e1c04c5d7c4
However, many developers now advocate for replacing ECDSA with Schnorr Signature. Once Schnorr Signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. When multiple addresses were to conduct transactions to a single address, each transaction would require their own signature. With Schnorr Signature, all these signatures would be combined into one. As a result, the network would be able to store more transactions in a single block.
https://preview.redd.it/axg3wayag3151.png?width=1601&format=png&auto=webp&s=93d958fa6b0e623caa82ca71fe457b4daa88c71e
The reduced size in signatures implies a reduced cost on transaction fees. The group of senders can split the transaction fees for that one group signature, instead of paying for one personal signature individually.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward for a block was 50 BTC per block. Block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, it implies that the block reward halving events will approximately take place every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
submitted by D-platform to u/D-platform [link] [comments]

How do you justify re-mining coins that are in P2KH UTXOs?

CSW advocates for miners being allowed to spend P2KH UTXOs without a providing a valid signature matching the pubkey under the scriptPubKey key hash preimage on Bitcoin SV. How can anyone who believes in sound money support the absolute lunacy?
Source : https://medium.com/@craig_10243/fixing-op-fals-fd157899d2b7
Here are CSW notable quotes on the topic:
In having an end capacity of just under 21 million bitcoin (BCH), some bitcoin will be “lost”, but this is analogous to bullion money being lost. In time, it can be found, and returned into circulation. I cover some of the differences in a prior article. When a private key is lost, it is merely out of circulation. It may be many years, but all old addresses eventually become mine-able and can be recovered.
Returning “lost” money into circulation is a future means of miner revenue and analogous to salvage firms who seek lost bullion on ships that have sunk in the sea.
-CSW
submitted by RudiMcflanagan to bitcoinsv [link] [comments]

Bitcoin’s Security and Hash Rate Explained

Bitcoin’s Security and Hash Rate Explained
As the Bitcoin hash rate reaches new all-time highs, there’s never been a better time to discuss blockchain security and its relation to the hashing power and the Proof of Work (PoW) that feed the network. The Bitcoin system is based on a form of decentralized trust, heavily relying on cryptography. This makes its blockchain highly secure and able to be used for financial transactions and other operations requiring a trustless ledger.
Far from popular belief, cryptography dates back to thousands of years ago. The same root of the word encryption — crypt — comes from the Greek word ‘kryptos’, meaning hidden or secret. Indeed, humans have always wanted to keep some information private. The Assyrians, the Chinese, the Romans, and the Greeks, they all tried over the centuries to conceal some information like trade deals or manufacturing secrets by using symbols or ciphers carved in stone or leather. In 1900 BC, Egyptians used hieroglyphics and experts often refer to them as the first example of cryptography.
Back to our days, Bitcoin uses cryptographic technologies such as:
  1. Cryptographic hash functions (i.e. SHA-256 and RIPEMD-160)
  2. Public Key Cryptography (i.e. ECDSA — the Elliptic Curve Digital Signature Algorithm)
While Public Key Cryptography, bitcoin addresses, and digital signatures are used to provide ownership of bitcoins, the SHA-256 hash function is used to verify data and block integrity and to establish the chronological order of the blockchain. A cryptographic hash function is a mathematical function that verifies the integrity of data by transforming it into a unique unidentifiable code.
Here is a graphic example to make things more clear:

– Extract from the MOOC (Massive Open Online Course) in Digital Currencies at the University of Nicosia.
Furthermore, hash functions are used as part of the PoW algorithm, which is a prominent part of the Bitcoin mining algorithm and this is what is of more interest to understand the security of the network. Mining creates new bitcoins in each block, almost like a central bank printing new money and creates trust by ensuring that transactions are confirmed only when enough computational power is devoted to the block that contains them. More blocks mean more computation, which means more trust.
With PoW, miners compete against each other to complete transactions on the network and get rewarded. Basically they need to solve a complicated mathematical puzzle and a possibility to easily prove the solution. The more hashing power, the higher the chance to resolve the puzzle and therefore perform the proof of work. In more simple words, bitcoins exist thanks to a peer to peer network that helps validate transactions in the ledger and provides enough trust to avoid that a third party is involved in the process. It also exists because miners give it life by resolving that computational puzzle, through the mining reward incentive they are receiving.
For more info, contact Block.co directly or email at [email protected].
Tel +357 70007828
Get the latest from Block.co, like and follow us on social media:
✔️Facebook
✔️LinkedIn
✔️Twitter
✔️YouTube
✔️Medium
✔️Instagram
✔️Telegram
✔️Reddit
✔️GitHub
submitted by BlockDotCo to u/BlockDotCo [link] [comments]

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniell Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this is a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0,0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor 03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes well we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but some what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of a Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level great. If they don't not such a big problem there will obviously be bridges between the two, so, yeah I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure yeah so our release will be concentrated on the stability, right, with the first release of Bitcoin SV and that involved doing a large amount of additional testing particularly not so much at the unit test level but at the more system test so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because of, you know, it was quite a rush to release the first version so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we so we recently shared the details of Testnet One and Two with the with the other BCH developer groups. The Gigablock test network we've shared up with one group so far but yeah we're building it as Steve pointed out to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what did the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we scaled back because we're not using it at the moment so they'll notice I think three. We have produced some large blocks there and it's helped us a lot in our research and into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting of a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - some of the items that we identified the code is done and that's going through the QA process but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do that that some of the other developer groups weren't willing to do right now is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about - a lot of the objection to either removing the box size entirely or increasing it on a larger scale is this idea of like the infinite block attack right and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” and is it is it something that that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something where there’s probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is increased when the software can handle it and you don’t run into the limit when this when the software improves and can handle it. Obviously we’re from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinitive block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean I mean it is bullshit from the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack, and let’s not call it the infinite block attack, let's just call it the large block attack - it takes a lot of time to validate that we gotten around by having parallel pipelines for blocks to come in, so you've got a block that's coming in it's got a unknown stuck on it for two hours or whatever downloading and validating it. At some point another block is going to get mined b someone else and as long as those two blocks aren't stuck in a serial pipeline then you know the problem kind of goes away.
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so if a 32MB block is not send 32MB of data in most cases, almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, rather CoinGeeks links across the firewall as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that can sustain is pretty large. So we're looking at a couple of options - it may well be the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, reassemble it on the other side so we can sort of transition the great Firewall without too much trouble, but I mean getting back to the core of your question - yes there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Putting faster links and you kick that can further down the road and you just keep on putting in faster links. I don't think 128 main blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size so I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got was I guess #1 like why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2 also specifically we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting the decision - we were initially planning on removing that cap altogether and the next cap that comes into play after that (next effective cap is a 10,000 byte limit on the size of the script). We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these log n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look if we can find this many in a short time we can fix them all (the whack-a-mole approach) but it does suggest that there may well be more unknown ones. So we thought about putting, you know, taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt, so the problem here is because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation goad path is properly paralyzed (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any any type of script. So if you get a few scripts that are nasty and lock up a thread for a while that's not going to stop the node from working because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, and I think we're in a much better position to get rid of that limit entirely because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep keep ticking along - there are other mitigations for this as well I mean I know you could always put a time limit on script execution if they wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, it doesn't have to accept that transaction, you can reject it. If it looks suspicious to the node it can just say you know we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. It's all possibilities there could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0,0:26:21.21,0:26:43.60
There’s a lot of discussions about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT up invert op l shift and op r shift you maybe explain the significance of those op codes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of one of the most significant things is other than two, which are minor variants of DUP and MUL, they represent almost the complete set of original op codes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that's that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago, but the new op codes using logical shifts and Satoshi’s version use arithmetic shifts - the general question that I think a lot of people keep bringing up is, maybe in a rhetorical way but they say why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah there's two parts there - the big number one and the L shift being a logical shift instead of arithmetic. so when we re-enabled these opcodes we've looked at them carefully and have adjusted them slightly as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic based shifts - I think I've posted a short script that did that, but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, so the values that represent numbers. The little endian which means they're swapped around to what many other systems - what I've considered normal - or big-endian. And if you start shifting that properly as a number then then shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and that was also another decision to replace three different string operators with OP_SPLIT was also made. So that was not a decision that we've made unilaterally, it was a decision that was made collectively with all of the BCH developers - well not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL was it, I think, and this is a single operator that multiplies the value by two, right, but it was pointed out that that can very easily be achieved by just doing multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators minimum yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean the decision about the idea to replace OP_SUBSTR, OP_LEFT, OP_RIGHT with OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with May opcodes and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry big-endian not really applicable just plain data strings) it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol that once some things are done they're done and you know if you want to want to make forward progress you've got to work within that that framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated, you know, number implementations because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other the other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of and you don't know what effect changing the behavior of these operators would mean. The big number thing is tricky, so another option might be, yeah, I don't know what the options for though it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - actually really would like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts” as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that and so what are your thoughts about non-standard scripts and the entirety of like an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the word well-known script template there’s already a check in Bitcoin that kind of tells you if it's well-known or not and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they do what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree I suppose, but yeah I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions of I’ve had with CoinGeek they’re quite keen on making their miners accept, you know, at least initially a wider variety of transactions eventually.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline base use case of cash payments through them and prioritizing. That's where it will remain important but on the interfaces from the node to the rest of the network, yeah I could easily see it being removed.
Cory: 0,0:38:06.24,0:38:35.46
*Connor mentioned that there's some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe the six-month delay might not cause a split. Well, first off what do you think about the ideas of a potential split and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well in November there's going to be a divergence of consensus rules regardless of whether we implement these new op codes or not. Bitcoin ABC released their spec for the November Hard fork change I think on August 16th or 17th something like that and their client as well and it included CTOR and it included DSV. Now for the miners that commissioned the SV project, CTOR and DSV are controversial changes and once they're in they're in. They can't be reversed - I mean CTOR maybe you could reverse it at a later date, but DSV once someone's put a P2SH transaction into the project or even a non P2SH transaction in the blockchain using that opcode it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with - there might be some contention about changing the opcode limit but what we're doing, I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes, so in terms of urgency - should we wait? Well the fact is that we can't - come November you know it's bit like Segwit - once Segwit was in, yes you arguably could get it out by spending everyone's anyone can spend transactions but in reality it's never going to be that easy and it's going to cause a lot of economic disruption, so yeah that's it. We're putting out changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whether whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change, right, but the November upgrade is there so we should use it while we can. Adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around because of Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down and the computation time stays the same but maybe the cost is lesser - do you kind of share his view on that or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this – there’s different ways to look at that, right. I'm an engineer - my specialization is software, so the economics of it I hear different opinions. I trust some more than others but I am NOT an economist. I kind of agree with the ones with my limited expertise on that it's a subsidy it looks very much like it to me, but yeah that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we’re going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that and it's like how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that that's out there? The software is just going to get unmanageable and DSV - that was my main consideration at the beginning was the, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and you know it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor you know why would you do that if you can if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (ie where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced) OP_SUBSTR is an interesting example - it's a combination of SPLIT, and SWAP and DROP opcodes to achieve it. So at really primitive script level we've got this philosophy of let's keep it minimal and at this sort of (?) philosophy it’s all let's just add a new opcode for every primitive function and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument why have a scripting language at all? Why not just add a hard code all of these functions in one at a time? You know, pay to public key hash is a well-known construct (?) and not bother executing a script at all but once we've done that we take away with all of the flexibility for people to innovate, so it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people don't feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction so a gigabyte script, then you can do any kind of crypto that you wanted even with 32-bit integer operations, Once you get rid of the 32-bit limit of course, a lot of those a lot of those scripts come up a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean it didn't have to be – it could just be instead of having a transaction with script you could have accounts and you could say trust, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some some very interesting applications from that. I mean it was Awemany his zero-conf script was really interesting, right. I mean it relies on DSV which is a problem (and some other things that I don't like about it), but him diving in and using script to solve this problem was really cool, it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question to a couple of people in our research team that have been working on the Rabin signature stuff this morning actually and I wasn't sure where they are up to with this, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV) so I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kinda already know the answer to this question, but there's a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into – with the plan that Bitcoin SV is on do you guys see like a potential one final release, you know that there's gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that) or are you guys more on the idea of being open-ended with appropriate testing that we can introduce new opcodes under appropriate testing.
Steve: 0:48:55.80,0:49:47.43
I think you've got a factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes there is a place for new opcodes but it's probably a limited place and in my opinion the cryptographic primitive functions for example CHECKSIG uses ECDSA with a specific elliptic curve, hash 256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be and we'll replace them with different hash functions, verification functions, at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, you know, that with the full scripting language some solution is implemented and we discover that this is really useful, and over a period of, like, you know measured in years not days, we find a lot of transactions are using this feature, then maybe, you know, maybe we should look at introducing an opcode to optimize it, but optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner’s point of view is if it'll make more sense for them to be able to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be miner’s main decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing miners are actually starting to employ their own developers. I’m not just talking about us - there are other miners in China that I know have got some really bright people on their staff that question and challenge all of the changes - study them and produce their own reports. We've been lucky with actually being able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

I decided to post this here as I saw some questions on the QRL discord.

Is elliptic curve cryptography quantum resistant?
No. Using a quantum computer, Shor's algorithm can be used to break Elliptic Curve Digital Signature Algorithm (ECDSA). Meaning: they can derive the private key from the public key. So if they got your public key, they got your private key, and they can empty your funds. https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#Quantum_computing_attacks https://eprint.iacr.org/2017/598.pdf
Why do people say that BTC is quantum resistant, while they use elliptic curve cryptography? (Here comes the idea from that never reusing a private key from elliptic curve cryptography (and public key since they form a pair) would be quantum resistant.)
Ok, just gonna start with the basics here. Your address, where you have your coins stalled, is locked by your public- private key pair. See it as your e-mail address (public key) and your password (Private key). Many people got your email address, but only you have your password. If you got your address and your password, then you can access your mail and send emails (Transactions). Now if there would be a quantum computer, people could use that to calculate your password/ private key, if they have your email address/ public key.
What is the case with BTC: they don't show your public key anywhere, untill you make a transaction. So your public key is private untill you make a transaction. How do they do that while your funds must be registered on the ledger? Wel, they only show the Hash of your public key (A hash is an outcome of an equation. Usually one-way hash functions are used, where you can not derive the original input from the output. But everytime you use the same hash function on the same original input (For example IFUHE8392ISHF), you will always get the same output (For example G). That way you can have your coins on public key IFUHE8392ISHF, while on the chain, they are on G.) So your funds are registered on the blockchain on the "Hash" of the public key. The Hash of the public key is also your "email address" in this case. So you give "G" as your address to send BTC to.
By the way, in the early days you could use your actual public key as your address. And miners would receive coins on their public key, not on the hashed public key. That is why all the Satoshi funds are vulnerable to quantum attacks even though these addresses have never been used to make transactions from. These public keys are already public instead of hashed. Also certain hard forks have exposed the public keys of unused addresses. So it's really a false sense of security that most people hang on to in the first place.
But it's actually a false sense of security over all.
Since it is impossible to derive a public key from the Hash of a public key, your coins are safe for quantum computers as long as you don't make any transaction. Now here follows the biggest misconseption: Pretty much everyone will think, great, so BTC is quantum secure! It's not that simple. Here it is important to understand two things:
1 How is a transaction sent? The owner has the private key and the public key and uses that to log into the secured environment, the wallet. This can be online or offline. Once he is in his wallet, he states how much he wants to send and to what address.
When he sends the transaction, it will be broadcasted to the blockchain network. But before the actual transaction that will be sent, it is formed into a package, created by the wallet. This happens out of sight of the sender.
That package ends up carrying roughly the following info: The public key to point to the address where the funds will be coming from, the amount that will be transferred, the public key of the address the funds will be transferred to.
Then this package caries the most important thing: a signature, created by the wallet, derived from the private- public key combination. This signature proves to the miners that you are the rightfull owner and you can send funds from that public key.
So this package is then sent out of the secure wallet environment to multiple nodes. The nodes don’t need to trust the sender or establish the sender’s "identity." And because the transaction is signed and contains no confidential information, private keys, or credentials, it can be publicly broadcast using any underlying network transport that is convenient. As long as the transaction can reach a node that will propagate it into the network, it doesn’t matter how it is transported to the first node.
2 How is a transaction confirmed/ fullfilled and registered on the blockchain?
After the transaction is sent to the network, it is ready to be processed. The nodes have a bundle of transactions to verify and register on the next block. This is done during a period called the block time. In the case of BTC that is 10 minutes.
If you comprehend the information written above, you can see that there are two moments where you can actually see the public key, while the transaction is not fullfilled and registered on the blockchain yet.
1: during the time the transaction is sent from the sender to the nodes
2: during the time the nodes verify the transaction.
This paper describes how you could hijack a transaction and make a new transaction of your own, using someone elses address to send his coins to an address you own during moment 2: the time the nodes verify the transaction:
https://arxiv.org/pdf/1710.10377.pdf
"(Unprocessed transactions) After a transaction has been broadcast to the network, but before it is placed on the blockchain it is at risk from a quantum attack. If the secret key can be derived from the broadcast public key before the transaction is placed on the blockchain, then an attacker could use this secret key to broadcast a new transaction from the same address to his own address. If the attacker then ensures that this new transaction is placed on the blockchain first, then he can effectively steal all the bitcoin behind the original address."
So this means that practically, you can't call BTC a quantum secure blockchain. Because as soon as you will touch your coins and use them for payment, or send them to another address, you will have to make a transaction and you risk a quantum attack.
Why would Nexus be any differtent?
If you ask the wrong person they will tell you "Nexus uses a combination of the Skein and Keccak algorithms which are the 2 recognized quantum resistant algorithms (keccal is used by the NSA) so instead of sha-256, Nexus has SK-1024 making it much harder to break." Which would be the same as saying BTC is quantum resistant because they use a Hashing function to hash the private key as long as no transaction is made.
No, this is their sollid try to be quantum resistant: Nexus states it's different because they have instant transactions (So there wouldn't be a period during which time the nodes verify the transaction. This period would be instant.) Also they use a particular order in which the miners verify transactions: First-In-First-Out (FIFO) (So even if instant is not instant after all, and you would be able to catch a public key and derive the private key, you would n't be able to have your transaction signed before the original one. The original one is first in line, and will therefore be confirmed first. Also for some reason Nexus has standardized fees which are burned after a transaction. So if FIFO wouldn't do the trick you would not be able to use a higher fee to get prioritized and get an earlyer confirmation.
So, during during the time the nodes verify the transaction, you would not be able to hijack a transaction. GREAT, you say? Yes, great-ish. Because there is still moment # 1: during the time the transaction is sent from the sender to the nodes. This is where network based attacks could do the trick:
There are network based attacks that can be used to delay or prevent transactions to reach nodes. In the mean time the transactions can be hijacked before they reach the nodes. And thus one could hijack the non quantum secure public keys (they are openly included in sent signed transactions) who then can be used to derive privatekeys before the original transaction is made. So this means that even if Nexus has instant transactions in FIFO order, it is totally useless, because the public key would be obtained by the attacker before they reach the nodes. Conclusion: Nexus is Nnot quantum resistant. You simply can't be without using a post quantum signature scheme.
Performing a DDoS attack or BGP routing attacks or NSA Quantum Insert attacks on a peer to peer newtork would be hard. But when provided with an opportunitiy to steal billions, hackers would find a way. For example:
https://bitcoinmagazine.com/articles/researchers-explore-eclipse-attacks-ethereum-blockchain/
For BTC:
https://eprint.iacr.org/2015/263.pdf
"An eclipse attack is a network-level attack on a blockchain, where an attacker essentially takes control of the peer-to-peer network, obscuring a node’s view of the blockchain."
That is exactly the receipe for what you would need to create extra time to find public keys and derive private keys from them. Then you could sign transactions of your own and confirm them before the originals do.
By the way, yes this seems to be fixed now, but it most definately shows it's possible. And there are other creative options. Either you stop tranasctions from the base to get out, while the sender thinks they're sent, or you blind the network and catch transactions there. There are always options, and they will be exploited when billions are at stake. The keys can also be hijacked when a transaction is sent from the users device to the blockchain network using a MITM attack. The result is the same as for network based attacks, only now you don't mess with the network itself. These attacks make it possible to 1) retrieve the original public key that is included in the transaction message. 2) Stop or delay the transaction message to arrive at the blockchain network. So, using a quantum computer, you could hijack transactions and create forged transactions, which you then send to the nodes to be confirmed before the nodes even receive the original transaction. There is nothing you could change to the Nexus network to prevent this. The only thing they can do is implement a quantum resistant signature scheme. They plan to do this in the future, like any other serious blockchain project. Yet Nexus is the only of these future quantum resistant projects to prematurely claim to be quantum resistant. There is only one way to get quantum resistancy: POST QUANTUM SIGNATURE SCHEMES. All the rest is just a shitty shortcut that won't work in the end.
(If you use this info on BTC, you will find that the 10 minutes blocktime that is used to estimate when BTC will be vulnerable for quantum attacks, can actually be more then 10 minutes if you catch the public key before the nodes receive them. This makes BTC vulnerable sooner thatn the 10 min blocktime would make you think.)
By the way, Nexus using FIFO and standadrized fees which are burned after the transaction comes with some huge downsides:
Why are WOTS+ signatures (and by extension XMSS) more quantum resistant?
First of all, this is where the top notch mathematicians work their magic. Cryptography is mostly maths. As Jackalyst puts it talking about post quantum signature schemes: "Having papers written and cryptographers review and discuss it to nauseating levels might not be important for butler, but it's really important with signature schemes and other cryptocraphic methods, as they're highly technical in nature."
If you don't believe in math, think about Einstein using math predicting things most coudldn't even emagine, let alone measure back then.
Then there is implementing it the right way into your blockchain without leaving any backdoors open.
So why is WOTS+ and by extension XMSS quantum resistant? Because math papers say so. With WOTS it would even take a quantum computer too much time to derive a private key from a public key. https://en.wikipedia.org/wiki/Hash-based_cryptography https://eprint.iacr.org/2011/484.pdf
What is WOTS+?
It's basiclally an optimized version of Lamport-signatures. WOTS+ (Winternitz one-time signature) is a hash-based, post-quantum signature scheme. So it's a post quantum signature scheme meant to be used once.
What are the risks of WOTS+?
Because each WOTS publishes some part of the private key, they rapidly become less secure as more signatures created by the same public/private key are published. The first signature won't have enough info to work with, but after two or three signatures you will be in trouble.
IOTA uses WOTS. Here's what the people over at the cryptography subreddit have to say about that:
https://www.reddit.com/crypto/comments/84c4ni/iota_signatures_private_keys_and_address_reuse/?utm_content=comments&utm_medium=user&utm_source=reddit&utm_name=u_QRCollector
With the article:
http://blog.lekkertech.net/blog/2018/03/07/iota-signatures/
Mochimo uses WOTS+. They kinda solved the problem: A transaction consists of a "Source Address", a "Destination Address" and a "Change Address". When you transact to a Destination Address, any remaining funds in your Source Address will move to the Change Address. To transact again, your Change Address then becomes your Source Address.
But what if someone already has your first address and is unaware of the fact you already send funds from that address? He might just send funds there. (I mean in a business environment this would make Mochimo highly impractical.) They need to solve that. Who knows, it's still a young project. But then again, for some reason they also use FIFO and fixed fees, so there I have the same objections as for Nexus.
How is XMSS different?
XMSS uses WOTS in a way that you can actually reuse your address. WOTS creates a quantum resistant one time signature and XMSS creates a tree of those signatures attached to one address so that the address can be reused for sending an asset.
submitted by QRCollector to QRL [link] [comments]

Are G. Andrew Stone and Roger Ver actually against fungibility in BTC?

Bitcoin's awaited privacy upgrade requires Schnorr signatures, which cannot be implemented into bitcoin without SegWit.
More about Schnorr sigs for fungibility via Bitcoin Magazine:
With the impending release of Segregated Witness, implementation of the Schnorr cryptographic signature algorithm might follow soon after, potentially improving Bitcoin's scalability, efficiency and privacy, all in one go.
Many cryptographers consider Schnorr signatures the best in the field, as they offer a strong level of correctness, do not suffer from malleability, are relatively fast to verify, and ‒ importantly ‒ support multisignature: several signatures can be aggregated into a single, new signature.
However, until now it has not been possible to utilize Schnorr in Bitcoin. Another type of signature scheme, Elliptic Curve Digital Signature Algorithm (ECDSA), is baked into the Bitcoin protocol, and changing that would require a hard fork.
That's where Segregated Witness comes in.
With Segregated Witness, all signature data is moved to a separate part of the transaction: the witness, which is not embedded in the “old” Bitcoin protocol. And thanks to script versioning, almost any rule applied in the witness can be changed through a soft fork. Including the type of signature scheme used.
Both Roger Ver and Andrew Stone seem to be anti-segwit for inexplicable reasons. BU supporters seem to think SegWit is favorable, but only as a hard fork - an argument that I don't think is smart and is quite frankly disingenuous. If anything it seems they simply object to it because Core developers designed and implemented it (the responsible way IMO, as a soft-fork).
Why does Andrew Stone largely dismiss the desire for segwit integration into BU? He claims there's too much "technical debt" w/ SegWit @~ 52 minutes into the interview; example given is a ridiculous cop-out. He also claims "if we're going to do a hardfork anyway, why not take SegWit and make it into a hard fork?" as an argument against Core's implementation, yet his software is the software that is actually set to hard-fork, yet he hasn't ported SegWit in to activate when/if his miner-controlled blocksize hard-fork activates! At least he encourages everyone that's not a sock-puppet and can demonstrate that to register on their forum to propose any changes they want to BU which will be voted up or down by the members. Whether your membership will be rejected outright if you have voiced any pro-core opinions or not is an open question.
On to Ver. Why has Roger Ver chosen a demonstrably anti-segwit approach (despite his politically safe, but practically meaningless rhetoric about not blocking SegWit), even though I have yet to find a single retail/wallet/actual-nonminer-btc-adoption-entity that is against the SegWit approach? It enables so many pro-fungibility features in the future that it seems to me to be a no-brainer to activate. My suspicion about Ver is that he's hedging his bets with massive investments in altcoins with better anonymity features anyway, such as ZCash, Monero and DASH, so even if Bitcoin fails or falters due to fungibility issues, his anon-coin investments will offset his BTC losses.
What do you think?
submitted by burnitdownforwhat to Bitcoin [link] [comments]

Quantum Computing and the Difference Between Public Keys and Bitcoin Addresses

I had a conversation recently with someone that made me wonder if there is a lot of confusion about the relation between public keys and Bitcoin addresses, so I thought I'd make a brief post explaining for interested parties this difference.
As most people know public key cryptography, particularly the Elliptic Curve Digital Signature Algorithm (ECDSA), is important to Bitcoin. In this cryptographic scheme an asymmetric key-pair is generated, a public key, which can be shared with anyone, and a private key which should be known by only you. In Bitcoin, with ECDSA, your private key is used to sign a transaction, confirming that it was in fact you, the rightful owner of a given UTXO or some set of UTXOs who is sending them off to some other address. Any arbitrary person (miners, people running validation nodes or SPV wallets) can confirm that it was you who sent the transaction because included in the transaction is your public key. Your public key will revert the cryptographic transformation done by your private key, thus allowing people to verify that the transaction being signed is in fact the one under consideration, and that it originates from the holder of the private key corresponding to this public key.
Missing from my explanation above is how transaction validators know the public key for a given Bitcoin address. A natural assumption might be that Bitcoin addresses are ECDSA public keys. This assumption is natural, but incorrect. Bitcoin Address are hashes of public keys (pubkey hashes). With this scheme, your public key is only ever exposed when you transact with Bitcoin. Validators verify that a public key you provide for a transaction is correct for your Bitcoin address by hashing that public key and making sure that the hash of the public key is equivalent with the pubkey hash.
Some corollaries of using the pubkey hash instead of the public key for Bitcoin Addresses are:
Expanding on the second point, suppose tomorrow a Quantum Computer came out with enough qubits to solve the currently intractable discrete logarithm problem of determining an ECDSA private key from a public key. If you had been following the Bitcoin best practice of not re-using addresses when you transact your Bitcoin, then the only window of opportunity for a thief would be the time before your Bitcoin gets added to a block when you were transacting with it.
Unlike public key cryptography, cryptographic hashes themselves are not particularly vulnerable to the advent of quantum computing.
submitted by Zectro to btc [link] [comments]

Dogecoin giveaway - Comment here to receive 100 doge. Also, AMA about cryptocurrency.

Once you get tipped, click the +accept link that the bot PMs you. You can then see your balance and recent dogetipbot transaction history with +history
I will also be answering any questions you have. I'm a moderator on /dogecoin and have been studying cryptocurrency for almost 3 years. Here's a glossary of terms you may not know which may help spark some questions if you don't know what to ask:
Hash: The result of an algorithm that takes any input data of arbitrary size and produces a fixed size output. It is impossible to discover the input data based on the resulting hash.
Private keys, public keys and addresses (privkey, pubkey, addr): Put simply, a private key is just a number. A really really big number. There are 2 ^ 160 possible private keys, each is a 256 bit integer in binary. Using the ECDSA your private keys correspond to a public key. And a hash of your public key is your wallet address.
Wallet: Software which generates and stores your keys and addresses.
Transaction (tx): A piece of data that contains where coins are coming from (inputs) and where they are going to (outputs). To be valid, your wallet software must sign the transaction with the private keys of all the inputs, this is how ownership of coins is proven.
Block: A data structure used by cryptocurrency networks which contains transactions.
Blockchain: The collection of blocks in a cryptocurrency network. Each new block contains the hash of the previous block, this is required for it to be valid. In this way, blocks are chained together, each one depends on the previous one to be valid.
Proof of work (POW): The process of hashing random data to discover a hash value that is lower than a predetermined number, that number is the "difficulty".
Mining: Miners collect all the transactions on the network and assemble them into a block. Using POW, miners insert random data (called a nonce, aka number used once) into the block and hash the block. When they find a hash value below the target difficulty, the block is considered valid by the rules of the network and miners broadcast the block to the network. The transactions in the block now have 1 confirmation. Miners are also allowed to claim a block reward (sort of a finder's fee) for their work. This incentivizes miners for their work. Mining is what secures the network from attack. If you have 51% of the entire network's mining power, then you can block transactions or even reverse transactions, so it is important that mining remains as decentralized as possible.
Node: A computer that is running cryptocurrency software which generates, validates and relays transactions and blocks. They download and validate the full blockchain. Nodes can also be wallets, this software is often called "core". The network of nodes IS the cryptocurrency network, they are what make the whole thing work. The node software also contains a friendly JSON API which can be used to perform many functions, such as looking up a transaction in the blockchain history.
submitted by peoplma to RedditDayOf [link] [comments]

Full English Transcript of Gavin's AMA on 8BTC, April 21st. (Part 1)

Part 2
Part 3
Raw transcript on Google Docs (English+Chinese): https://docs.google.com/document/d/1p3DWMfeGHBL6pk4Hu0efgQWGsUAdFNK6zLHubn5chJo/edit?usp=sharing
Translators/Organizers: emusher, kcbitcoin, nextblast, pangcong, Red Li, WangXiaoMeng. (Ranked in alphabetical order)
1.crypto888
Q: What is your relationship with Blockstream now? Are you in a Cold War? Your evaluation on BS was pretty high “If this amazing team offers you a job, you should take it,” tweeted Gavin Andresen, Chief Scientist, Bitcoin Foundation.” But now, what’s your opinion on BS?
A: I think everybody at Blockstream wants Bitcoin to succeed, and I respect and appreciate great work being done for Bitcoin by people at Blockstream.
We strongly disagree on priorities and timing; I think the risks of increasing the block size limit right away are very small. I see evidence of people and businesses getting frustrated by the limit and choosing to use something else (like Ethereum or a private blockchain); it is impossible to know for certain how dangerous that is for Bitcoin, but I believe it is more danger than the very small risk of simply increasing or eliminating the block size limit.
2. Ma_Ya
Q: 1) Why insist on hard fork at only 75%? You once explained that it is possible to be controlled by 5% if we set the threshold at 95%. I agree, but there should be some balance here. 75% means a high risk in splitting, isn’t it too aggressive? Is it better if we set it to 90%?
A: 1)The experience of the last two consensus changes is that miners very quickly switch once consensus reaches 75% -- the last soft fork went from 75% support to well over 95% support in less than one week. So I’m very confident that miners will all upgrade once the 75% threshold is reached, and BIP109 gives them 28 days to do so. No miner wants to create blocks that will not be accepted by the network.
Q: 2) How to solve the potentially very large blocks problem Classic roadmap may cause, and furthur causing the centralization of nodes in the future?
A: 2)Andreas Antonopoulos gave a great talk recently about how people repeatedly predicted that the Internet would fail to scale. Smart engineers proved them wrong again and again, and are still busy proving them wrong today (which is why I enjoy streaming video over my internet connection just about every night).
I began my career working on 3D graphics software, and saw how quickly we went from being able to draw very simple scenes to today’s technology that is able to render hundreds of millions of triangles per second.
Processing financial transactions is much easier than simulating reality. Bitcoin can easily scale to handle thousands of transactions per second, even on existing computers and internet connections, and even without the software optimizations that are already planned.
Q: 3) Why do you not support the proposal of RBF by Satoshi, and even plan to remove it in Classic completely?
A: 3) Replace-by-fee should be supported by most of the wallets people are using before it is supported by the network. Implementing replace-by-fee is very hard for a wallet, especially multi-signature and hardware wallets that might not be connected to the network all of the time.
When lots of wallet developers start saying that replace-by-fee is a great idea, then supporting it at the network level makes sense. Not before.
Q: 4) . Your opinion on soft fork SegWit, sidechain, lighnting network. Are you for or against, please give brief reasons. Thanks.
A: 4) The best way to be successful is to let people try lots of different things. Many of them won’t be successful, but that is not a problem as long as some of them are successful.
I think segregated witness is a great idea. It would be a little bit simpler as a hard fork instead of a soft fork (it would be better to put the merkle root for the witness data into the merkle root in the block header instead of putting it inside a transaction), but overall the design is good.
I think sidechains are a good idea, but the main problem is finding a good way to keep them secure. I think the best uses of sidechains will be to publish “write-only” public information involving bitcoin. For example, I would like to see a Bitcoin exchange experiment with putting all bids and asks and trades on a sidechain that they secure themselves, so their customers can verify that their orders are being carried out faithfully and nobody at the exchanges is “front-running” them.
Q: 5) Can you share your latest opinion on Brainwallet? It is hard for new users to use long and complex secure passphrase, but is it a good tool if it solves this problem?
A: 5) We are very, very bad at creating long and complex passphrases that are random enough to be secure. And we are very good at forgetting things.
We are much better at keeping physical items secure, so I am much more excited about hardware wallets and paper wallets than I am about brain wallets. I don’t trust myself to keep any bitcoin in a brain wallet, and do not recommend them for anybody else, either.
3. BiTeCui
Q: Gavin, do you have bitcoins now? What is your major job in MIT? Has FBI ever investigated on you? When do you think SHA256 might be outdated, it seems like it has been a bit unsafe?
A: Yes, a majority of my own person wealth is still in bitcoins -- more than a financial advisor would say is wise.
My job at MIT is to make Bitcoin better, in whatever way I think best. That is the same major job I had at the Bitcoin Foundation. Sometimes I think the best way to make Bitcoin better is to write some code, sometimes to write a blog post about what I see happening in the Bitcoin world, and sometimes to travel and speak to people.
The FBI (or any other law enforcement agency) has never investigated me, as far as I know. The closest thing to an investigation was an afternoon I spent at the Securities and Exchange Commission in Washington, DC. They were interested in how I and the other Bitcoin developers created the software and how much control we have over whether or not people choose to run the software that we create.
“Safe or unsafe” is not the way to think about cryptographic algorithms like SHA256. They do not suddenly go from being 100% secure for everything to completely insecure for everything. I think SHA256 will be safe enough to use in the all ways that Bitcoin is using it for at least ten years, and will be good enough to be used as the proof-of-work algorithm forever.
It is much more likely that ECDSA, the signature algorithm Bitcoin is using today, will start to become less safe in the next ten or twenty years, but developer are already working on replacements (like Schnorr signatures).
4. SanPangHenBang
Q: It’s a pleasure to meet you. I only have one question. Which company are you serving? or where do you get your salary?
A: The Media Lab at MIT (Massachusetts Institute of Technology) pays my salary; I don’t receive regular payments from anybody else.
I have received small amounts of stock options in exchange for being a techical advisor to several Bitcoin companies (Coinbase, BitPay, Bloq, Xapo, Digital Currency Group, CoinLab, TruCoin, Chain) which might be worth money some day if one or more of those companies do very well. I make it very clear to these companies that my priority is to make Bitcoin better, and my goal in being an advisor to them is to learn more about the problems they face as they try to bring Bitcoin to more of their customers.
And I am sometimes (once or twice a year) paid to speak at events.
5.SaTuoXi
Q: Would you mind share your opinion on lightning network? Is it complicated to implement? Does it need hard fork?
A: Lightning does not need a hard fork.
It is not too hard to implement at the Bitcoin protocol level, but it is much more complicated to create a wallet capable of handling Lightning network payments properly.
I think Lightning is very exciting for new kinds of payments (like machine-to-machine payments that might happen hundreds of times per minute), but I am skeptical that it will be used for the kinds of payments that are common on the Bitcoin network today, because they will be more complicated both for wallet software and for people to understand.
6. pangcong
Q: 1) There has been a lot of conferences related to blocksize limit. The two took place in HongKong in Decemeber of 2015 and Feberary of 2016 are the most important ones. Despite much opposition, it is undeniable that these two meetings basically determines the current status of Bitcoin. However, as the one of the original founders of Bitcoin, why did you choose to not attend these meetings? If you have ever attended and opposed gmax’s Core roadmap (SegWit Priority) in one of the meetings, we may be in a better situation now, and the 2M hard fork might have already begun. Can you explain your absence in the two meetings? Do you think the results of both meetings are orchestrated by blockstream?
A: 1) I attended the first scaling conference in Montreal in September of 2015, and had hoped that a compromise had been reached.
A few weeks after that conference, it was clear to me that whatever compromise had been reached was not going to happen, so it seemed pointless to travel all the way to Hong Kong in December for more discussion when all of the issues had been discussed repeatedly since February of 2015.
The February 2016 Hong Kong meeting I could not attend because I was invited only a short time before it happened and I had already planned a vacation with my family and grandparents.
I think all of those conferences were orchestrated mainly by people who do not think raising the block size limit is a high priority, and who want to see what problems happen as we run into the limit.
Q: 2) We have already known that gmax tries to limit the block size so as to get investment for his company. However, it is obvious that overthrowing Core is hard in the short term. What if Core continues to dominate the development of Bitcoin? Is it possible that blockstream core will never raise the blocksize limit because of their company interests?
A: 2) I don’t think investment for his company is Greg’s motivation-- I think he honestly believes that a solution like lightning is better technically.
He may be right, but I think it would be better if he considered that he might also be wrong, and allowed other solutions to be tried at the same time.
Blockstream is a funny company, with very strong-willed people that have different opinions. It is possible they will never come to an agreement on how to raise the blocksize limit.
7. HeiYanZhu
Q: I would like to ask your opinion on the current situation. It’s been two years, but a simple 2MB hard fork could not even be done. In Bitcoin land, two years are incredibly long. Isn’t this enough to believe this whole thing is a conspiracy?
A: I don’t think it is a conspiracy, I think it is an honest difference of opinion on what is most important to do first, and a difference in opinion on risks and benefits of doing different things.
Q: How can a multi-billion network with millions of users and investors be choked by a handful of people? How can this be called decentrilized and open-source software anymore? It is so hard to get a simple 2MB hard fork, but SegWig and Lighting Network with thousands of lines of code change can be pushed through so fast. Is this normal? It is what you do to define if you are a good man, not what you say.
A: I still believe good engineers will work around whatever unnecessary barriers are put in their way-- but it might take longer, and the results will not be as elegant as I would prefer.
The risk is that people will not be patient and will switch to something else; the recent rapid rise in developer interest and price of Ethereum should be a warning.
Q: The problem now is that everybody knows Classic is better, however, Core team has controlled the mining pools using their powers and polical approaches. This made them controll the vast majority of the hashpower, no matter what others propose. In addition, Chinese miners have little communication with the community, and do not care about the developement of the system. Very few of them knows what is going on in the Bitcoin land. They almost handed over their own power to the mining pool, so as long as Core controls the pools, Core controls the whole Bitcoin, no matter how good your Classic is. Under this circumstance, what is your plan?
A: Encourage alternatives to Core. If they work better (if they are faster or do more) then Core will either be replaced or will have to become better itself. I am happy to see innovations happening in projects like Bitcoin Unlimited, for example. And just this week I see that Matt Corallo will be working on bringing an optmized protocol for relaying blocks into Core; perhaps that was the plan all along, or perhaps the “extreme thin blocks” work in Bitcoin Unlimited is making that a higher priority. In any case, competition is healthy.
Q: From this scaling debate, do you think there is a huge problem with Bitcoin development? Does there exsit development centrilization? Does this situation need improvment? For example, estabilish a fund from Bitcoin as a fundation. It can be used for hiring developers and maintainers, so that we can solve the development issue once and for all.
A: I think the Core project spends too much time thinking about small probability technical risks (like “rogue miners” who create hard-to-validate blocks or try to send invalid blocks to SPV wallets) and not enough time thinking about much larger non-technical risks.
And I think the Core project suffers from the common open source software problem of “developers developing for developers.” The projects that get worked on are the technically interesting projects-- exciting new features (like the lightning network), and not improving the basic old features (like improving network performance or doing more code review and testing).
I think the situation is improving, with businesses investing more in development (but perhaps not in the Core project, because the culture of that project has become much less focused on short-term business needs and more on long-term exciting new features).
I am skeptical that crowd-funding software development can work well; if I look at other successful open source software projects, they are usually funded by companies, not individuals.
8.jb9802
You are one of the most-repected person in Bitcoin world, I won’t miss the chance to ask some questions. First of all, I am a Classic supporter. I strongly believe that on-chain transcations should not be restrained artificially. Even if there are transcations that are willing to go through Lighting Network in the future, it should be because of a free market, not because of artificial restrication. Here are some of my questions:
Q: 1) For the past two years, you’ve been proposing to Core to scale Bitcoin. In the early days of the discussion, Core devs did agree that the blocksize should be raised. What do you think is the major reason for Core to stall scaling. Does there exist conflict of interest between Blockstream and scaling?
A: 1) There might be unconscious bias, but I think there is just a difference of opinion on priorities and timing.
Q: 2) One of the reason for the Chinese to refuse Classic is that Classic dev team is not technically capable enough for future Bitcoin development. I also noticed that Classic does have a less frequent code release compared to Core. In your opinion, is there any solution to these problems? Have you ever thought to invite capable Chinese programers to join Classic dev team?
A: 2) The great thing about open source software is if you don’t think the development team is good enough (or if you think they are working on the wrong things) you can take the software and hire a better team to improve it.
Classic is a simple 2MB patch on top of Core, so it is intentional that there are not a lot of releases of Classic.
The priority for Classic right now is to do things that make working on Classic better for developers than working on Core, with the goal of attracting more developers. You can expect to see some results in the next month or two.
I invite capable programmers from anywhere, including China, to help any of the teams working on open source Bitcoin software, whether that is Classic or Core or Unlimited or bitcore or btcd or ckpool or p2pool or bitcoinj.
Q: 3) Another reason for some of the Chinese not supporting Classic is that bigger blocks are more vulnerable to spam attacks. (However, I do think that smaller blocks are more vlunerable to spam attack, because smaller amount of money is needed to choke the blockchain.) What’s our opinion on this?
A: 3) The best response to a transaction spam attack is for the network to reject transactions that pay too little fees but to simply absorb any “spam” that is paying as much fees as regular transactions.
The goal for a transaction spammer is to disrupt the network; if there is room for extra transactions in blocks, then the network can just accept the spam (“thank you for the extra fees!”) and continue as if nothing out of the ordinary happened.
Nothing annoys a spammer more than a network that just absorbs the extra transactions with no harmful effects.
Q: 4) According to your understanding on lighting network and sidechains,if most Bitcoin transactions goes throught lighting network or sidechains, it possible that the fees paid on the these network cannot reach the main-chain miners, which leaves miners starving. If yes, how much percent do you think will be given to miners.
A: 4) I don’t know, it will depend on how often lightning network channels are opened and closed, and that depends on how people choose to use lightning.
Moving transactions off the main chain and on to the lightning network should mean less fees for miners, more for lightning network hubs. Hopefully it will also mean lower fees for users, which will make Bitcoin more popular, drive up the price, and make up for the lower transaction fees paid to miners.
Q: 5) The concept of lighting network and sidechains have been out of one or two years already, when do you think they will be fully deployed.
A: 5) Sidechains are already “fully deployed” (unless you mean the version of sidechains that doesn’t rely on some trusted gateways to move bitcoin on and off the sidechain, which won’t be fully deployed for at least a couple of years). I haven’t seen any reports of how successful they have been.
I think Lightning will take longer than people estimate. Seven months ago Adam Back said that the lightning network might be ready “as soon as six months from now” … but I would be surprised if there was a robust, ready-for-everybody-to-use lightning-capable wallet before 2018.
Q: 6)Regarding the hard fork, Core team has assumed that it will cause a chain-split. (Chinese miners are very intimitated by this assumption, I think this is the major reason why most of the Chinese mining pools are not switching to Classic). Do you think Bitcoin will have a chain-split?
A: 6) No, there will not be a chain split. I have not talked to a single mining pool operator, miner, exchange, or major bitcoin business who would be willing to mine a minority branch of the chain or accept bitcoins from a minority branch of the main chain.
Q: 7) From your point of view, do you think there is more Classic supporters or Core supporters in the U.S.?
A: 7) All of the online opinion pools that have been done show that a majority of people worldwide support raising the block size limit.
9. btcc123
Q: Which is more in line with the Satoshi’s original roadmap, Bitcoin Classic or Bitcoin Core? How to make mining pools support and adopt Bitcoin Classic?
A: Bitcoin Classic is more in line with Satoshi’s original roadmap.
We can’t make the mining pools do anything they don’t want to do, but they are run by smart people who will do what they think is best for their businesses and Bitcoin.
10.KuHaiBian
Q: Do you have any solution for mining centralization? What do you think about the hard fork of changing mining algorithms?
A: I have a lot of thoughts on mining centralization; it would probably take ten or twenty pages to write them all down.
I am much less worried about mining centralization than most of the other developers, because Satoshi designed Bitcoin so miners make the most profit when they do what is best for Bitcoin. I have also seen how quickly mining pools come and go; people were worried that the DeepBit mining pool would become too big, then it was GHash.io…
And if a centralized mining pool does become too big and does something bad, the simplest solution is for businesses or people to get together and create or fund a competitor. Some of the big Bitcoin exchanges have been seriously considering doing exactly that to support raising the block size limit, and that is exactly the way the system is supposed to work-- if you don’t like what the miners are doing, then compete with them!
I think changing the mining algorithm is a complicated solution to a simple problem, and is not necessary.
11. ChaLi
Q: Last time you came to China, you said you want to "make a different". I know that in USA the opposition political party often hold this concept, in order to prevent the other party being totally dominant. Bitcoin is born with a deep "make a different" nature inside. But in Chinese culture, it is often interpreted as split “just for the sake of splitting”, can you speak your mind on what is your meaning of "make a different"?
A: I started my career in Silicon Valley, where there is a lot of competition but also a lot of cooperation. The most successful companies find a way to be different than their competitors; it is not a coincidence that perhaps the most successful company in the world (Apple Computer) had the slogan “think different.”
As Bitcoin gets bigger (and I think we all agree we want Bitcoin to get bigger!) it is natural for it to split and specialize; we have already seen that happening, with lots of choices for different wallets, different exchanges, different mining chips, different mining pool software.
12. bluestar
Q: 1) The development of XT and Classic confirmed my thoughts that it is nearly impossible to use a new version of bitcoin to replace the current bitcoin Core controlled by Blockstream. I think we will have to live with the power of Blockstream for a sufficient long time. It means we will see the deployment of SegWit and Lighting network. If it really comes to that point, what will you do? Will you also leave like Mike Hearn?
A: 1) With the development of Blockchain, bitcoin will grow bigger and bigger without any doubts, And also there will be more and more companies related to the bitcoin network. When it comes to money, there will be a lot of fights between these companies. Is it possible to form some kind of committee to avoid harmful fights between these companies and also the situation that a single company controlling the direction of the bitcoin development? Is there any one doing this kind of job right now?
Q: 2) My final question would be, do you really think it is possible that we can have a decentralized currency? Learning from the history, it seems like every thing will become centralized as long as it involves human. Do you have any picture for a decentralized currency or even a society? Thanks.
A: 2) I think you might be surprised at what most people are running a year or three from now. Perhaps it will be a future version of Bitcoin Core, but I think there is a very good chance another project will be more successful.
I remember when “everybody” was running Internet Explorer or Firefox, and people thought Google was crazy to think that Chrome would ever be a popular web browser. It took four years for Chrome to become the most popular web browser.
In any case, I plan on working on Bitcoin related projects for at least another few years. Eventually it will become boring or I will decide I need to take a couple of years of and think about what I want to do next.
As for fights between companies: there are always fights between companies, in every technology. There are organizations like the IETF (Internet Engineering Task Force) that try to create committees so engineers at companies can spend more time cooperating and less time fighting; I’m told by people who participate in IETF meetings that they are usually helpful and create useful standards more often than not.
Finally, yes, I do think we can have a “decentralized-enough” currency. A currency that might be controlled at particular times by a small set of people or companies, but that gives everybody else the ability to take control if those people or businesses misbehave.
13. satoshi
Hi Gavin, I have some questions:
Q: 1) I noticed there are some new names added to the classic team list. Most people here only know you and Jeff. Can you briefly introduce some others to the Chinese community?
A: 1)
Tom Zander has been acting as lead developer, and is an experienced C++ developer who worked previously on the Qt and Debian open source projects.
Pedro Pinheiro is on loan from Blockchain.info, and has mostly worked on continuous integration and testing for Classic.
Jon Rumion joined recently, and has been working on things that will make life for developers more pleasant (I don’t want to be more specific, I don’t want to announce things before they are finished in case they don’t work out).
Jeff has been very busy starting up Bloq, so he hasn’t been very active with Classic recently. I’ve also been very busy traveling (Barbados, Idaho, London and a very quick trip to Beijing) so haven’t been writing much code recently.
Q: 2) if bitcoin classic succeeded (>75% threshold), what role would you play in the team after the 2MB upgrade finished, as a leader, a code contributor, a consultant, or something else?
A: 2)Contributor and consultant-- I am trying not to be leader of any software project right now, I want to leave that to other people who are better at managing and scheduling and recruiting and all of the other things that need to be done to lead a software project.
Q: 3) if bitcoin classic end up failed to achieve mainstream adoption (<75% 2018), will you continue the endeavor of encouraging on-chain scaling and garden-style growth of bitcoin?
A: 3) Yes. If BIP109 does not happen, I will still be pushing to get a good on-chain solution to happen as soon as possible.
Q: 4) Have you encountered any threat in your life, because people would think you obviously have many bitcoins, like what happened to Hal Finney (RIP), or because some people have different ideas about what bitcoin's future should be?
A: 4) No, I don’t think I have received any death threats. It upsets me that other people have.
Somebody did threaten to release my and my wife’s social security numbers and other identity information if I did not pay them some bitcoins a couple of years ago. I didn’t pay, they did release our information, and that has been a little inconvenient at times.
Q: 5) Roger Ver (Bitcoin Jesus) said bitcoin would worth thousands of dollars. Do you have similar thoughts? If not, what is your opinion on bitcoin price in future?
A: 5) I learned long ago to give up trying to predict the price of stocks, currencies, or Bitcoin. I think the price of Bitcoin will be higher in ten years, but I might be wrong.
Q: 6) You've been to China. What's your impression about the country, people, and the culture here? Thank you!
A: 6) I had a very quick trip to Beijing a few weeks ago-- not nearly long enough to get a good impression of the country or the culture.
I had just enough time to walk around a little bit one morning, past the Forbidden City and walk around Tianmen Square. There are a LOT of people in China, I think the line to go into the Chairman Mao Memorial Hall was the longest I have ever seen!
Beijing reminded me a little bit of London, with an interesting mix of the very old with the very new. The next time I am in China I hope I can spend at least a few weeks and see much more of the country; I like to be in a place long enough so that I really can start to understand the people and cultures.
14. Pussinboots
Q: Dear Gavin, How could I contact you, we have an excellent team and good plans. please confirm your linkedin.
A: Best contact for me is [email protected] : but I get lots of email, please excuse me if your messages get lost in the flood.
15. satoshi
Q: Gavin, you've been both core and classic code contributor. Are there any major differences between the two teams, concerning code testing (quality control) and the release process of new versions?
A: Testing and release processes are the same; a release candidate is created and tested, and once sufficiently tested, a final release is created, cryptographically signed by several developers, and then made available for download.
The development process for Classic will be a little bit different, with a ‘develop’ branch where code will be pulled more quickly and then either fixed or reverted based on how testing goes. The goal is to create a more developer-friendly process, with pull requests either accepted or rejected fairly quickly.
16. tan90d
I am a bitcoin enthusiast and a coin holder. I thank you for your great contribution to bitcoin. Please allow me to state some of my views before asking:
  1. I'm on board with classic
  2. I support the vision to make bitcoin a powerful currency that could compete with Visa
  3. I support segwit, so I'll endorse whichever version of bitcoin implementation that upgrades to segwit, regardless of block size.
  4. I disagree with those who argue bitcoin main blockchain should be a settlement network with small blocks. My view is that on the main chain btc should function properly as a currency, as well as a network for settlement.
  5. I'm against the deployment of LN on top of small block sized blockchain. Rather, it should be built on a chain with bigger blocks.
  6. I also won’t agree with the deployment of many sidechains on top of small size block chain. Rather, those sidechains should be on chain with bigger blocks.
With that said, below are my questions:
Q: 1) If bitcoin is developed following core's vision, and after the 2020 halving which cuts block reward down to 6.125BTC, do you think the block transaction fee at that time will exceed 3BTC?
A: 1) If the block limit is not raised, then no, I don’t think transaction fees will be that high.
Q: 2) If bitcoin is developed following classic's vision, and after the 2020 halving which cuts block reward down to 6.125BTC, do you think the block transaction fee at that time will exceed 3BTC?
A: 2) Yes, the vision is lots of transactions, each paying a very small fee, adding up to a big total for the miners.
Q: 3) If bitcoin is developed following core's vision, do you think POW would fail in future, because the mining industry might be accounted too low value compared with that of the bitcoin total market, so that big miners could threaten btc market and gain profit by shorting?
*The questioner further explained his concern.
Currently, its about ~1.1 billion CNY worth of mining facilities protecting ~42 billion CNY worth (6.5 Billion USD) of bitcoin market. The ratio is ~3%. If bitcoin market cap continues to grow and we adopt layered development plan, the mining portion may decrease, pushing the ratio go even down to <1%, meaning we are using very small money protecting an huge expensive system. For example, in 2020 if bitcoin market cap is ~100 billion CNY, someone may attempt to spend ~1 billion CNY bribe/manipulate miners to attack the network, thus making a great fortune by shorting bitcoin and destroying the ecosystem.
A: 3) Very good question, I have asked that myself. I have asked people if they know if there have been other cases where people destroyed a company or a market to make money by shorting it -- as far as I know, that does not happen. Maybe because it is impossible to take a large short position and remain anonymous, so even if you were successful, you would be arrested for doing whatever you did to destroy the company or market (e.g. blow up a factory to destroy a company, or double-spend fraud to try to destroy Bitcoin).
Q: 4) If bitcoin is developed following classic's vision, will the blocks become too big that kill decentralization?
A: 4) No, if you look at how many transactions the typical Internet connection can support, and how many transactions even a smart phone can validate per second, we can support many more transactions today with the hardware and network connections we have now.
And hardware and network connections are getting faster all the time.
Q: 5) In theory, even if we scale bitcoin with just LN and sidechains, the main chain still needs blocks with size over 100M, in order to process the trading volume matching Visa's network. So does core have any on-chain scaling plan other than 2MB? Or Core does not plan to evolve bitcoin into something capable of challenging visa?
A: 5) Some of the Core developer talk about a “flexcap” solution to the block size limit, but there is no specific proposal.
I think it would be best to eliminate the limit all together. That sounds crazy, but the most successful Internet protocols have no hard upper limits (there is no hard limit to how large a web page may be, for example), and no protocol limit is true to Satoshi’s original design.
Q: 6) If (the majority of) hash rate managed to switch to Classic in 2018, will the bitcoin community witness the deployment of LN in two years (~2018)?
A: 6) The bottleneck with Lightning Network will be wallet support, not support down at the Bitcoin protocol level. So I don’t think the deployment schedule of LN will be affected much whether Classic is adopted or not.
Q: 7) If (majority) hash rate upgraded to blocks with segwit features in 2017 as specified in core's roadmap, would classic propose plans to work on top of that (blocks with segwit)? Or insist developing simplified segwit blocks as described in classic's roadmap?
A: 7) Classic will follow majority hash rate. It doesn’t make sense to do anything else.
Q: 8) If most hash rate is still on core's side before 2018, will you be disappointed with bitcoin, and announce that bitcoin has failed like what Mike did, and sell all your stashed coins at some acceptable price?
A: 8) No-- I have said that I think if the block size limit takes longer to resolve, that is bad for Bitcoin in the short term, but smart engineers will work around whatever road blocks you put in front of them. I see Bitcoin as a long-term project.
Q: 9) If we have most hash rate switched to classic's side before 2018, what do you think will be the fate of Blockstream company?
A: 9) I think Blockstream might lose some employees, but otherwise I don’t think it will matter much. They are still producing interesting technology that might become a successful business.
Q: 10) If we have most hash rate still on core's side before 2018, what do you think will be the fate of Blockstream company?
A: 10) I don’t think Blockstream’s fate depends on whether or not BIP109 is adopted. It depends much more on whether or not they find customers willing to pay for the technology that they are developing.
Q: 11) If we have most hash rate still on core's side before 2018, what do you think will be the fate of companies that support classic, such as Coinbse, bitpay, and Blockchain.info?
A: 11) We have already seen companies like Kraken support alternative currencies (Kraken supports Litecoin and Ether); if there is no on-chain scaling solution accepted by the network, I think we will see more companies “hedging their bets” by supporting other currencies that have a simpler road map for supporting more transactions.
Q: 12) If we have most hash rate switched to classic's side before 2018, will that hinder the development of sidechain tech? What will happen to companies like Rockroot(Rootstock?) ?
A: 12) No, I think the best use of sidechains is for things that might be too risky for the main network (like Rootstock) or are narrowly focused on a small number of Bitcoin users. I don’t think hash rate supporting Classic will have any effect on that.
Q: 13) Between the two versions of bitcoin client, which one is more conducive to mining industry, classic or core?
A: 13) I have been working to make Classic better for the mining industry, but right now they are almost identical so it would be dishonest to say one is significantly better than the other.
17. Alfred
Q: Gavin, can you describe what was in your mind when you first learned bitcoin?
A: I was skeptical that it could actually work! I had to read everything I could about it, and then read the source code before I started to think that maybe it could actually be successful and was not a scam.
submitted by kcbitcoin to btc [link] [comments]

Technical Q&A with Joey

Question: I think it’s great that you pop into the TG from time to time - any significant updates?
Zhaojun’s answer: Going well, we have completed the development of dcrm3.0, dcrm4.0 is being developed.
Question: how is Zhaojun connected to Fusion project?
Joey’s answer: Zhaojun is one of FUSION’s developers!
Question: full time? does he have a profile on linkedin or elsewhere?
Joey’s answer: Yes, Zhaojun works on DCRM full time! Along with other developers, they are all based in Shanghai!
Question: Hey Joey, what can you tell us about the DCRM development, are you involved at all with that?
Joey’s answer: I hope that you guys are aware that Bitcoin, Ethereum and all ERC20 can now be Locked In through DCRM. And yes, I am involved in the development of DCRM. Just not in core development but required components around it We can see more different types of assets in the future. We will provide a list of them upon release! And yes it includes stablecoins BNB indeed is an ERC20 which can already be Locked In
Joey’s comment: We can see more different types of assets in the future. We will provide a list of them upon release! And yes it includes stablecoins. BNB indeed is an ERC20 which can already be Locked In. But also Gemini USD for example; which is ERC20 as well
Question: Will any BTC fork work in first version ? BCH, etc?
Joey’s answer: YES! Any blockchain that uses ECDSA algorithm is confirmed working!
Question: Nice!!! So, the only thing that keeps us away from mainnet releasing is the problem with the hybrid consensus timing, right?
Joey’s answer: I can’t comment on what keeps us from the MainNet.. but I can say that the TimeLock issue that there was before has been RESOLVED!
Joey’s comment: But yes, there is a issue with the Hybrid Consensus at the moment
Question: a new issue?
Joey’s answer: No, no! The TimeLock issue there was before was resolved. Hybrid Consensus is a different thing!
Question: ah ok....the hybrid issue continues to be your target, right?
Joey’s comment: The developers are working hard on resolving the issue, and a lot of progress has already been made!
Question: So do you have any doubts that the problem will be solved? Or are you confident that it will be solved n just a matter of time? Great news thus far
Joey’s answer: Of course I am more than confident, everything in life is a matter of time. Rome wasn’t built in one day as well!
Question: Are you proposing to change the hybrid consensus mechanism. Will it still be POS/POW?
Joey’s answer: No we are not, the time is working on resolving the issue.
Question: Completely agree but I think a few people were fearful that the team was met with some unsolvable issues! This little update should be enough to convince all those who were scared to finish buying their nodes. Thanks for the communication tonight
Joey’s answer: There really is no need to panic at all. Let’s be realistic, this is crypto.. this is Telegram.. this is a chat with 10.000 people and barely anyone knows each other.. it’s a scary world.. but rest assured. FUSION is here to stay and deliver what it promised from Day 1. This is cryptofinance!
Question: It was hinted that the nodes would be optimized for cpu . Can you share a little more on this topic? Would a gpu miner be overkill and earn no more than a well built cpu node? This is huge for me looking forward to hearing back!
Joey’s answer: A GPU mining rig would be overkill right now, Zhaojun has already said that VPS would be good to set up a node.
Question: I recall. Are there any specifics for the VPs requirements so I can begin to compare quotes to host? Thank you!
Joey’s answer: Just a regular VPS (usually 2 cores, 4GB RAM) should be good as a VPS
Question: Can you run multiple nodes on one vps thanks
Joey’s answer: I think you have asked that question before! Yes theoretically you could, but I wouldn’t recommend it!
Question: I hope others are hear to ask their tech questions before you go. Most of mine are answered. I’m def curious about the existence of a Fusion treasury, our standing after the lawyer meetings in manhattan to discuss utility vs security, and if FSn has a lawyer to enforce any patent infringements
Joey’s answer: Well, the current Hybrid Consensus issue has to do with the timing where one is going faster than the other. Which makes it that they don’t end together.
Question: Have you selected a second stable coin to include in the dcrm? Perhaps usdc or gusd?
Joey’s answer: GUSD is already working!
Question: Has Andre reviewed any of the code as yet. Thanks joey ?
Joey’s answer: I have no idea, haven’t seen any code review from him!
Question: Is 3k tps your maximum future target or are you going to use side chain or shardining to support fast tps for exchanges?
Joey’s answer: We will improve TPS overtime yes!
Question: What about the website?
Joey’s answer: Yes, the website is currently being worked at. It’s really getting nice! Just as you would expect
Question: Some ppl said it is delaying too much....any particular reason? Can you say when it will be on line?
Joey’s answer: There is no particular reason, nor is there a delay. Everything is going according plan!
Question: Joey how close are you in developing native FSN token wallet. Thanks my last question appreciate it.
Joey’s answer: This is something you should expect to see upon MainNet!
Question: Any dapp?
Joey’s answer: The word dApp is a veeery big word. There is a lot of work in progress, already yes.
Joey’s comment: Stay posted the upcoming 24hours.. There might be a small suprise
Joey’s comment: DCRM 3.0 implemented the DCRM algorithm on a Peer to Peer network. It offers RPC API calls and cross-chain support for both Bitcoin and Ethereum. Now, it's time for DCRM 4.0 to rise. DCRM 4.0 will be implemented on the actual chain including Lock In and Lock Out on the chain. And since we save the best for last; 4.0 offers Multi-Currency Smart Contracts!
submitted by Yolmer13 to FusionFoundation [link] [comments]

History Lesson for new VIA Viacoin Investors

Viacoin is an open source cryptocurrency project, based on the Bitcoin blockchain. Publicly introduced on the crypto market in mid 2014, Viacoin integrates decentralized asset transaction on the blockchain, reaching speeds that have never seen before on cryptocurrencies. This Scrypt based, Proof of Work coin was created to try contrast Bitcoin’s structural problems, mainly the congested blockchain delays that inhibit microtransaction as this currency transitions from digital money to a gold-like, mean of solid value storage. Bitcoin Core developers Peter Todd and Btc have been working on this currency and ameliorated it until they was able to reach a lightning fast speed of 24 second per block. These incredible speeds are just one of the features that come with the implementation of Lightning Network, and and make Bitcoin slow transactions a thing of the past. To achieve such a dramatic improvement in performance, the developers modified Viacoin so that its OP_RETURN has been extended to 80 bytes, reducing tx and bloat sizes, overcoming multi signature hacks; the integration of ECDSA optimized C library allowed this coin to reach significant speedup for raw signature validation, making it perform up to 5 times better. This will mean easy adoption by merchants and vendors, which won’t have to worry anymore with long times between the payment and its approval. Todd role as Chief Scientist and Advisor has been proven the right choice for this coin, thanks to his focus on Tree Chains, a ground breaking feature that will fix the main problems revolving around Bitcoin, such as scalability issues and the troubles for the Viacoin miners to keep a reputation on the blockchain in a decentralized mining environment. Thanks to Todd’s expertise in sidechains, the future of this crypto currency will see the implementation of an alternative blockchain that is not linear. According to the developer, the chains are too unregulated when it comes to trying to establish a strong connection between the operations happening on one chain and what happens elsewhere. Merged mining, scalability and safety are at risk and tackling these problems is mandatory in order to create a new, disruptive crypto technology. Tree Chains are going to be the basis for a broader use and a series of protocols that are going to allow users and developers to use Viacoin’s blockchain not just to mine and store coins, but just like other new crypto currencies to allow the creation of secure, decentralized consensus systems living on the blockchain The commander role on this BIP9 compatible coin’s development team has now been taken by a programmer from the Netherlands called Romano, which has a great fan base in the cryptocurrency community thanks to his progressive views on the future of the world of cryptos. He’s in strong favor of SegWit, and considers soft forks on the chain not to be a problem but an opportunity: according to him it will provide an easy method to enable scripting upgrades and the implementation of other features that the market has been looking for, such as peer to peer layers for compact block relay. Segregation Witness allows increased capacity, ends transactions malleability, makes scripting upgradeable, and reduces UTXO set. Because of these reasons, Viacoin Core 0.13 is already SegWit ready and is awaiting for signaling.
Together with implementation of SegWit, Romano has recently been working on finalizing the implementation of merged mining, something that has never been done with altcoins. Merged mining allows users to mine more than one block chain at the same time, this means that every hash the miner does contributes to the total hash rate of all currencies, and as a result they are all more secure. This release pre-announcement resulted in a market spike, showing how interested the market is in the inclusion of these features in the coin core and blockchain. The developer has been introducing several of these features, ranging from a Hierarchical Deterministic key (HD key) generation that allows all Viacoin users to backup their wallets, to a compact block relay, which decreases block propagation times on the peer to peer network; this creates a healthier network and a better baseline relay security margin. Viacoin’s support for relative locktime allows users and miners to time-lock a transaction, this means that a new transaction will be prevented until a relative time change is achieved with a new OP code, OP_CHECKSEQUENCEVERITY, which allows the execution of a script based on the age of the amount that is being spent. Support for Child-Pays-For-Parent procedures in Viacoin has been successfully enabled, CPFP will alleviate the problem of transactions that stuck for a long period in the unconfirmed limbo, either because of network bottlenecks or lack of funds to pay the fee. Thanks to this method, an algorithm will selects transactions based on federate inclusive unconfirmed ancestor transaction; this means that a low fee transaction will be more likely to get picked up by miners if another transaction with an higher fee that speeds its output gets relayed. Several optimizations have been implemented in the blockchain to allow its scaling to proceed freely, ranging from pruning of the chain itsel to save disk space, to optimizing memory use thanks to mempool transaction filtering. UTXO cache has also been optimization, further allowing for significant faster transaction times. Anonymity of transaction has been ameliorated, thanks to increased TOR support by the development team. This feature will help keep this crypto currency secure and the identity of who works on it safe; this has been proven essential, especially considering how Viacoin’s future is right now focused on segwit and lightning network . Onion technology used in TOR has also been included in the routing of transactions, rapid payments and instant transaction on bi directional payment channels in total anonymity. Payments Viacoin’s anonymity is one of the main items of this year’s roadmap, and by the end of 2017 we’ll be able to see Viacoin’s latest secure payment technology, called Styx, implemented on its blockchain. This unlinkable anonymous atomic payment hub combines off-the-blockchain cryptographic computations, thanks to Viacoin’s scriptin functionalities, and makes use of security RSA assumptions, ROM and Elliptic Curve digital signature Algorithm; this will allow participants to make fast, anonymous transfer funds with zero knowledge contingent payment proof. Wallets already offer strong privacy, thanks to transactions being broadcasted once only; this increases anonymity, since it can’t be used to link IPs and TXs. In the future of this coin we’ll also see hardware wallets support reaching 100%, with Trezor and Nano ledger support. These small, key-chain devices connect to the user’s computer to store their private keys and sign transactions in a safe environment. Including Viacoin in these wallets is a smart move, because they are targeted towards people that are outside of hardcore cryptocurrency users circle and guarantees exposure to this currency. The more casual users hear of this coin, the faster they’re going to adopt it, being sure of it’s safety and reliability. In last October, Viacoin price has seen a strong decline, probably linked to one big online retailer building a decentralized crypto stock exchange based on the Counterparty protocol. As usual with crypto currencties, it’s easy to misunderstand the market fluctuations and assume that a temporary underperforming coin is a sign of lack of strength. The change in the development team certainly helped with Viacoin losing value, but by watching the coin graphs it’s easy to see how this momentary change in price is turning out to be just one of those gentle chart dips that precede a sky rocketing surge in price. Romano is working hard on features and focusing on their implementation, keeping his head low rather than pushing on strong marketing like other alt coins are doing. All this investment on ground breaking properties, most of which are unique to this coin, means that Viacoin is one of those well kept secret in the market. Minimal order books and lack of large investors offering liquidity also help keep this coin in a low-key position, something that is changing as support for larger books is growing. As soon as the market notices this coin and investments go up, we are going to see a rapid surge in the market price, around the 10000 mark by the beginning of January 2018 or late February. Instead of focusing on a public ICO like every altcoin, which means a sudden spike in price followed by inclusion on new exchanges that will dry up volume, this crypto coin is growing slowly under the radar while it’s being well tested and boxes on the roadmap get checked off, one after the other. Romano is constantly working on it and the community around this coin knows, such a strong pack of followers is a feature that no other alt currency has and it’s what will bring it back to the top of the coin market in the near future. His attitude towards miners that are opposed to SegWit is another strong feature to add to Viacoin, especially because of what he thinks of F2Pool and Bitmain’s politics towards soft forks. The Chinese mining groups seem scared that once alternative crypto coins switch to it they’re going to lose leveraging power for what concerns Bitcoin’s future and won’t be able to speculate on the mining and trading market as much as they have been doing in the past, especially for what concerns the marketing market.
It’s refreshing to see such dedication and releases being pushed at a constant manner, the only way to have structural changes in how crypto currencies work can only happen when the accent is put on development and not on just trying to convince the market. This strategy is less flashy and makes sure the road is ready for the inevitable increase in the userbase. It’s always difficult to forecast the future, especially when it concerns alternative coins when Bitcoin is raising so fast. A long term strategy suggestion would be to get around 1BTC worth of this cryptocoin as soon as possible and just hold on it: thanks to the features that are being rolled in as within 6 months there is going to be an easy gain to be made in the order of 5 to 10 times the initial investment. Using the recent market dip will make sure that the returns are maximized. What makes Viacoin an excellent opportunity right now is that the price is low and designed to rise fast, as its Lightning Network features become more mainstream. Lightning Network is secure, instant payment that aren’t going to be held back by confirmation bottlenecks, a blockchain capable to scale to the billions of transactions mark, extremely low fees that do not inhibit micropayments and cross-chain atomic swap that allow transaction across blockchain without the need of a third party custodians. These features mean that the future of this coin is going to be bright, and the the dip in price that started just a while ago is going to end soon as the market prepares for the first of August, when when the SegWit drama will affect all crypto markets. The overall trend of viacoin is bullish with a constant uptrend more media attention is expected , when news about the soft fork will spread beyond the inner circle of crypto aficionados and leak in the mainstream finance news networks. Solid coins like Viacoin, with a clear policy towards SegWit, will offer the guarantees that the market will be looking for in times of doubt. INVESTMENT REVIEW Investment Rating :- A+
https://medium.com/@VerthagOG/viacoin-investment-review-ca0982e979bd
submitted by alex61688 to viacoin [link] [comments]

Will Quantum Computers BREAK Bitcoin Someday? (Explained For Beginners) Wie funktionieren Bitcoin Transaktionen im Detail?  Teil 14 Kryptographie Crashkurs Quantenrechner = Untergang des Bitcoin? René Pickhardt - YouTube Elliptic Curve Digital Signature Algorithm (ECDSA) in NS2

Bitcoin relies on blockchain idea. Blockchain is the public ledger storing all transaction on it and chain of blocks, which are tied up each other with cryptographic algorithms. The first block is called genesis block generated by Satoshi Nakamoto. In Bitcoin, coin creation is defined as mining by miners which depends on Proof-of-Work. Das in Bitcoin verwendete Verschlüsselungsverfahren ist „Elliptic Curve Digital Signature Algorithm (ECDSA)“ unter der standard elliptic curve “secp256k1”, die eine 128-Bit-Verschlüsselung bietet. Transaktionen in Bitcoin. In jedem Währungssystem ist es notwendig Währungseinheiten zwischen Beteiligten zu transferieren. Bitcoin ... PDF On Nov 1, 2019, Yuditha Ichsani and others published The Cryptocurrency Simulation using Elliptic Curve Cryptography Algorithm in Mining Process from Normal, Failed, and Fake Bitcoin ... If you only use Bitcoin addresses one time, which has always been the recommended practice, then your ECDSA public key is only ever revealed at the one time that you spend bitcoins sent to each address. A quantum computer would need to be able to break your key in the short time between when your transaction is first sent and when it gets into a block. It will likely be decades after a quantum ... Eric Rykwalder is a software engineer and one of Chain.com s founders. Here, he gives an overview of the mathematical foundations of the bitcoin protocol. One reason bitcoin can be confusing for beginners is that the technology behind it redefines the concept of ownership. To own something in the traditional sense, be

[index] [474] [21728] [682] [40990] [19074] [33716] [15794] [4947] [19] [49604]

Will Quantum Computers BREAK Bitcoin Someday? (Explained For Beginners)

- Elliptic curve digital signature algorithm (ECDSA) - Signature algorithm - Shor's algorithm - Hashing algorithm - SHA-256 algorithm - Pre-image attack - Collision attack - Proof of Work mining ... Quantenrechner werden wahrscheinlich in der Zukunft verfügbar sein, aber der Bitcoin ist durch eine 2-Layer-Sicherheitstechnik (SHA-256, Elliptic Curve Digital Signature Algorithm) dagegen ... Elliptic Curve Digital Signature Algorithm ECDSA Part 10 Cryptography Crashcourse - Duration: 35:32. Dr. Julian Hosp - Blockchain, Krypto, Bitcoin 5,773 views Elliptic Curve Digital Signature Algorithm (ECDSA) in ns2: To get this project in ONLINE or through TRAINING Sessions, Contact: JP INFOTECH, Old No.31, New No.86, 1st Floor, 1st Avenue, Ashok ... Reference : https://8gwifi.org/docs/window-crypto-ecdsa.jsp The Web crypto api ECDSA algorithm identifier is used to perform signing and verification using t...

#