Transcript for #bitcoin-dev 2017/10/11

10:18 maks25 Hi! I’m confused about the inner workings of bitcoin. As far as I understand: Transactions are chained together by hashing the PublicKey(of the Receiver) and the previous transaction, and then signing it with the PrivateKey(of the Sender). My question is, does each transaction also include the unhashed data? If not, how do we know how many coins are being transferred since all the values are hashed?
10:26 matsjj Hello, I'm currently working on replay protection. I was always under the impression that a defined SIGHASH type is consensus-critical. Reading the code (and getting some feedback from apoelstra), it seems like SIGHASH_ALL is the default, and any value that's not SIGHASH_SINGLE/SIGHASH_NONE will implicitly default to SIGHASH_ALL. (Bonus: Wouldn't this allow a miner to change the txhash of all signatures, since any SIGHASH would be valid?)
10:30 arubi matsjj, the sighash type is 4 bytes that are signed as part of the whole sighash
10:31 arubi the relayed tx has only one byte sighash type in it. if it's unknown, it's treated as ALL afaik
10:31 arubi but still, changing it around (or the other 3 bytes that aren't relayed) invalidates the sig
10:31 arubi the cleanest replay protection is flipping a sighash type bit that is >7
10:32 arubi that way you can both keep the ALL, NONE, etc. stuff the same, and not risk replays defaulting to ALL
10:32 arubi (since there is no way for the other fork to know about the flipped bit at bit >7)
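arubi's point about the four sighash bytes can be sketched in Python. The named SIGHASH constants match Bitcoin's script interpreter; `REPLAY_FLAG` and `base_type` are hypothetical names for illustration only:

```python
# Sketch of the sighash-type arithmetic discussed above. The SIGHASH_*
# constants are Bitcoin's; REPLAY_FLAG and base_type are illustrative.
SIGHASH_ALL = 0x01
SIGHASH_NONE = 0x02
SIGHASH_SINGLE = 0x03
SIGHASH_ANYONECANPAY = 0x80

# Hypothetical replay-protection flag in a bit above bit 7 of the 4-byte
# value committed to by the signature (only the low byte is relayed).
REPLAY_FLAG = 1 << 8  # not representable in the single relayed byte

def base_type(sighash_byte: int) -> int:
    """The low bits select NONE/SINGLE; any other value acts like ALL."""
    t = sighash_byte & 0x1f
    return t if t in (SIGHASH_NONE, SIGHASH_SINGLE) else SIGHASH_ALL

# Flipping the high bit leaves the relayed byte (and base type) unchanged,
# but changes the 4-byte value the signature commits to:
forked = SIGHASH_ALL | REPLAY_FLAG
print(base_type(forked & 0xff))  # 1 -> still behaves as SIGHASH_ALL
```

This is why a chain that doesn't know about the flipped bit can never reconstruct the same sighash, while hashing behavior on the protected chain stays otherwise identical.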
10:33 matsjj I see! Yea this makes sense, otherwise anyone could just change it to SIGHASH_NONE and change the outputs
10:33 matsjj Thanks, very helpful!
10:34 arubi yw
10:37 arubi maks25, can you rephrase? what unsigned data?
10:37 arubi err, unhashed data
10:41 maks25 arubi: I’m just a bit confused about what data is contained within a transaction that miners need in order to verify it. After reading the nakamoto whitepaper I was under the impression that the transaction data within a block is all hashed, wondering if it also contains unhashed data on the transactions.
10:41 arubi a transaction appears in full
10:42 maks25 arubi: great
10:42 arubi it contains the hash of the input transaction that it's spending, so to validate it you'll have to look at that previous input
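The chaining arubi describes can be sketched as follows. `double_sha256` and `txid` are illustrative helpers, and the serialized bytes are a placeholder, not a real transaction:

```python
import hashlib

# Sketch: a legacy txid is the double SHA-256 of the full serialized
# transaction, displayed byte-reversed. Each input references the txid of
# the transaction it spends, which is what chains transactions together.
def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(serialized: bytes) -> str:
    return double_sha256(serialized)[::-1].hex()  # big-endian display order

serialized = b"\x01\x00\x00\x00placeholder-tx-bytes"  # stand-in, not real
print(txid(serialized))
```

To validate a spend, a node looks up the previous transaction by this hash and checks the referenced output's script and value.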
10:42 maks25 arubi: that makes sense
10:44 esotericnonsense maks25: consider also that the encoding of the block doesn't matter all that much providing you have the data to back it
10:44 esotericnonsense maks25: i'd have to read up to refresh my memory, but i believe compact block transfer essentially uses that
10:45 esotericnonsense maks25: e.g. if you already have a bunch of transactions that exist in a new block you see, then you don't need to redownload them all, you just need the hashes included in that block (if you already have all the transactions in your mempool)
10:46 esotericnonsense (and the ordering of the hashes)
10:46 maks25 esotericnonsense: Yea that makes sense given that you can validate them just by their hashes
10:46 esotericnonsense well no, you need the full transaction to validate the tx, but if you have a prevalidated tx you just need to know whether or not it is included in the merkle tree and where
10:56 maks25 esotericnonsense: And then just check that the Merkle root matches the one in the block header? Am I getting that right?
11:19 esotericnonsense yeah
11:20 esotericnonsense for example if I know that you already have all transactions I can just send you the block header plus the ordering of the leaves (hashes) in the merkle tree
11:20 esotericnonsense and from that you can construct a block
11:21 esotericnonsense well, I'd have to send you the coinbase tx (because you probably wouldn't have that), and any transactions that the miner knew about that you didn't
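The reconstruction esotericnonsense outlines can be sketched as a merkle-root computation over ordered txids. `merkle_root` is an illustrative helper following Bitcoin's rule of duplicating the last hash on odd-sized levels:

```python
import hashlib

# Sketch: rebuild the merkle root from an ordered list of txids, then
# compare it against the root in the block header. Txids are given in
# display (big-endian) hex and hashed in internal (reversed) byte order.
def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids_hex):
    level = [bytes.fromhex(t)[::-1] for t in txids_hex]  # to internal order
    while len(level) > 1:
        if len(level) % 2:  # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0][::-1].hex()  # back to display order

# With a single transaction (e.g. only a coinbase), the root is the txid:
print(merkle_root(["aa" * 32]) == "aa" * 32)  # True
```

This is the check that lets a node accept a compact block: given the header plus the ordered txids (and any transactions it didn't already have), it recomputes the root and compares it to the header's.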
11:33 maks25 esotericnonsense: any in-depth readings you would recommend?
11:34 maks25 I’ve probably spent over a week on researching Bitcoin and also Ethereum, but the majority of the stuff I am finding is very broad. (I did read both the white papers though)
12:54 CadelLeeStormer I have a small question regarding nLockTime transactions... If for example today i create such a transaction that cannot be accepted into a block before a year. If i propagate it now to the network. Would it be dropped by nodes or kept by them ?
12:57 GAit CadelLeeStormer: it won't propagate or even be accepted by your own node
13:01 CadelLeeStormer Thanks.
14:20 ghost43 do minikeys have compressed or uncompressed pubkeys? or is it ambiguous?
14:58 wxss ghost43: AFAICT the only specification I can find doesn't mention that. It only defines how you derive the minikey from a private key.
14:59 ghost43 great....... :P thanks
14:59 wxss You can then use that private key to either create a compressed or uncompressed pubkey
14:59 wxss But maybe someone else knows about a better (more complete) specification of minikeys
15:00 ghost43 well obviously. you can create all kinds of scriptPubKeys and addresses from a private key. but apps can't be expected to look up and monitor all of them. one could use a minikey to create a native p2wpkh address..
15:01 wxss exactly, so the specification is incomplete.
15:36 matsjj @arubi Out of curiosity - does that mean Bitcoin Cash is just policy protected? It looks like they just used a different SIGHASH, without changing any of the other bits when preparing the SignatureHash?
15:36 matsjj See
15:38 arubi matsjj, they changed the entire signature scheme. they're using bip143 for all transactions, and don't have segwit scriptpubkeys at all
15:38 arubi so even if the client could parse and try to check the sig, it would still be way wrong because the incorrect scheme is applied
15:39 matsjj Ah I see, I guess there are no segwit outputs that could be consumed that way in their chain
15:40 arubi you could send to a segwit program, but that would just be anyone-can-spend script with no further soft fork rules for checksigs
15:44 arubi matsjj, one comment I wanted to make on your PR, now with segwit live, there's a weird edge case that can happen when only using sighash bytes for replay protection
15:45 arubi say I have funds in a segwit output A, I can create a replay protected tx to B on the s2x chain, then after a while create a sighash ALL tx from the same A to the same B on the bitcoin chain and still retain the same txid
15:46 arubi this isn't actually replay, but might cause problems for services
15:47 matsjj Oh of course! It is indeed an edge case, as the transaction on the bitcoin chain would need the new sighash too (which is non-standard)
15:47 matsjj But an interesting one
15:47 arubi no, what I mean is
15:47 matsjj Oh you are right, since the signature (and sighash type) is no longer part of the txhash
15:47 arubi it's because signatures are not part of the txid anymore. for non segwit outputs this isn't true because a different bit in the first byte will change the txid
15:47 arubi right
15:48 matsjj So while there is no risk of replay YET, they also cannot be considered _split_ yet.
15:48 arubi yea it's a very weird state. essentially the same transaction
15:48 matsjj Since the exact same outputs live on both chains
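The edge case can be sketched like this. The byte strings are stand-ins, not real serializations; the point is only that the txid commits to the non-witness data:

```python
import hashlib

# Sketch: a segwit txid is the double SHA-256 of the NON-witness
# serialization, so two transactions that differ only in their witnesses
# (e.g. different sighash bytes in the signatures) share the same txid.
def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

base = b"version|inputs|outputs|locktime"  # non-witness fields (stand-in)
witness_a = b"sig-with-replay-protected-sighash"  # spend on one chain
witness_b = b"sig-with-plain-SIGHASH_ALL"         # spend on the other

txid_a = dsha(base)[::-1].hex()  # witness_a is not part of the hash
txid_b = dsha(base)[::-1].hex()  # witness_b is not part of the hash
print(txid_a == txid_b)  # True: same txid on both chains
```

For legacy outputs the signature sits in the scriptSig, which is hashed into the txid, so a different sighash byte there would change the txid.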
15:49 arubi the only clean place to flip another bit is the transaction version
15:49 arubi but it has to be one of the invalid bits else I can still do that on both chains
15:50 matsjj Funny. That's certainly something to keep in mind when it comes to writing an actual specification for this, for whoever wants to build on top of it.
15:50 matsjj I was looking into that too - which bits are actually invalid?
15:50 arubi hmm, good question, my guess is anything which will make it negative (if it even can be, heh)
15:51 arubi but there's got to be such a bit
16:07 arubi matsjj, s/the only place/the only other place/
16:08 arubi because really if you flip an invalid bit in the version, then you're done. no need for elaborate sighash bytes anymore
16:21 grubles i'm going to take another stab at gitian building
16:21 grubles this time with virtualbox
16:21 arubi what did you try before? lxc right?
16:22 grubles yeah
16:23 arubi was looking into the docker thing in the gitian doc, but seems like it's from 2015, using debian wheezy.. probably too old for current builds
16:26 grubles yeah i've tried using docker previously too
16:26 grubles a few months ago
16:26 grubles the virtualbox guide uses wheezy too
16:26 grubles er no
16:26 grubles jessie
16:34 grubles wait i think the virtualbox route uses lxc too...
16:35 arubi probably eventually
16:45 matsjj arubi: I'm not sure wrt nVersion. In bitcoinJ and BitcoinJS it's deserialised as an UNsigned 32-bit integer, while the bitcoin wiki states that this is a signed 32-bit integer.
16:47 matsjj Also I think if we are to mess with it in a way that bitcoin-core sees it as invalid, there's a fair chance other receiving wallets would perceive it as broken too. That was the _good_ part about using SigHash, as lite clients normally don't look at the signature.
16:48 wumpus in bitcoin core nVersion is signed
16:48 arubi there's no other way that I can think of to fix the edge case when segwit is active
16:50 matsjj wumpus, oh, byte representations are the same until the last bit. So flipping the topmost bit would render the transaction invalid in bitcoin-core, is that correct?
17:05 matsjj Ah doesn't look like this is actually enforced anywhere
17:06 arubi matsjj, I'm pretty sure that a negative version is invalid
17:21 phantomcircuit arubi, they're not
17:21 phantomcircuit just non-standard
17:21 matsjj arubi, so far I've only found that negative block numbers are disallowed (since BIP 34)
17:21 phantomcircuit there's transactions with negative versions in the chain
17:21 arubi oh really, that's a good bit of trivia
17:23 matsjj Ah indeed. if you ever need to prove it to someone
17:24 arubi not bad :)
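The signed-vs-unsigned nVersion point above can be sketched with Python's struct module: the same four little-endian bytes stay positive under an unsigned read, but the signed reading (what Bitcoin Core uses) goes negative once the top bit is set:

```python
import struct

# Sketch: version 1 with the topmost bit set. The bytes on the wire are
# identical; only the interpretation (signed vs unsigned) differs.
raw = struct.pack("<I", 0x80000001)

as_unsigned, = struct.unpack("<I", raw)  # 2147483649
as_signed,   = struct.unpack("<i", raw)  # -2147483647
print(as_unsigned, as_signed)
```

As phantomcircuit notes, such negative-version transactions are valid but non-standard, so they relay poorly even though they can appear in the chain.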
17:46 grubles ok yeah there's definitely something up with my networking
17:46 grubles because there is a -dev discussion from 2015 about the same issue
17:46 arubi it's dns isn't it
17:47 grubles not sure
17:47 grubles i can ping from the gitian-builder VM
17:47 arubi so it's just that apt-cacher-ng isn't running there
17:48 grubles of course the logs end prematurely
17:48 arubi at that destination that is
17:49 grubles it seems
17:49 grubles that should be running on the host, right?
17:49 arubi whatever host has the interface with the ip
17:50 arubi usually it's something that runs on its own container
17:51 arubi but I'm not sure about how gitian expects it
17:52 grubles i would assume the gitian host runs the cacher
17:52 grubles and the container connects to the host
17:52 arubi where can you ping from?
17:53 grubles the debian gitian host
17:53 arubi ah, and does `sudo ifconfig -a` say something about an interface with that ip?
17:54 grubles nope
17:54 grubles br0 is on
17:54 grubles eth0 is on
17:55 grubles so i'm not even sure what has ....
17:55 arubi can you try `sudo netstat -tulpan | grep ''`
17:56 arubi ssh into it? see if it accepts the same credentials as the host..
17:56 grubles ah i can ssh in
17:56 arubi hostname?
17:56 grubles "debian"
17:56 grubles same as host
17:57 arubi `which apt-cacher-ng` returns anything?
17:57 grubles no
17:57 arubi try the netstat again from within the ssh
17:57 arubi or I guess the ifconfig will return the same output if it's the same system
17:58 arubi lots of ways to know, create a file in /tmp :)
17:58 grubles yeah same ifconfig output
17:58 arubi alright, you could try to install and run apt-cacher-ng on the host then, or make it route to some debian mirror maybe?
17:59 grubles hm it's already installed
17:59 arubi huh, so just not in your path
18:00 arubi `sudo which apt-cacher-ng` ?
18:00 arubi or `sudo /etc/init.d/apt-cacher-ng status`
18:00 grubles /usr/sbin/apt-cacher-ng
18:00 grubles ah yea
18:00 arubi `sudo /etc/init.d/apt-cacher-ng start` probably :)
18:00 grubles it's running
18:01 arubi I'm trying to remember if there was some stuff I had to do to make it run (was unrelated..)
18:01 arubi I can't remember anything specific. iirc it was upping a container with the service running, then other containers just fetch stuff from it
18:02 arubi can you `apt-get update` on the host without errors?
18:02 grubles maybe i have to specify which address to bind to in /etc/apt-cacher-ng/acng.conf
18:02 grubles yea apt-get update works
18:03 arubi it looks like binding to the address is correct, the guest expects this ip
18:04 arubi some elaborate test you can try is to point your host's sources.list file to your own apt-cacher-ng's ip and try to apt-get update again
18:04 arubi if that succeeds, then the service is running correctly at least
18:11 grubles hm, couldn't bind to socket: cannot assign requested address
18:12 arubi where's that from?
18:12 grubles /etc/init.d/apt-cacher-ng status
18:15 grubles after i added BindAddress: localhost to acng.conf
18:17 arubi did you restart it first?
18:17 grubles yes
18:18 arubi wait why call that address localhost?
18:18 arubi call it debian or whatever
18:18 arubi localhost is 127.x.x.x stuff
18:19 arubi did you try the netstat command before you edited acng.conf actually?
18:19 arubi I'm interested if it was already running there or not. something did respond to ping
18:21 grubles doesn't seem to be running when ssh'd into
18:21 grubles based on the netstat
18:23 arubi netstat just shows port 22 then?
18:23 grubles yeah 22 and 22222
18:24 arubi okay, can you revert what you did on acng.conf and instead try to set your host's 'sources.list' file to fetch from ?
18:24 arubi (and restart -ng so it's running again)
18:25 grubles still unable to connect
18:25 arubi oh so it wasn't running before either then?
18:26 grubles /etc/init.d/apt-cacher-ng status shows that it's running
18:28 arubi and pointing your host's sources.list to now fails apt-get update?
18:29 grubles right
18:30 arubi does the apt-cacher service have a log file in /var/log maybe?
18:30 arubi or probably it's documented somewhere
18:32 grubles yeah in /var/log/apt-cacher-ng/apt-cacher.log
18:32 grubles looks like it's cached the ubuntu packages
18:32 grubles from the base-vm
18:33 arubi which port is it supposed to be running on?
18:33 arubi try `netstat -tulpan | grep <port>`..
18:34 grubles 3142
18:34 grubles looks like it's bound to
18:34 grubles hm
18:34 arubi great..
18:35 arubi try to set your sources.list to localhost instead of that ip and update
18:36 arubi maybe something is routing to localhost for some traffic but not all? very weird
18:37 grubles yeah i can't find any documentation on it
18:37 grubles setting to localhost fails too
18:37 arubi does `netcat 3142` connect at all?
18:38 grubles no
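For reference, arubi's netcat check can be reproduced with a small Python socket probe. The host and port here are assumptions matching apt-cacher-ng's default of 3142:

```python
import socket

# Stand-in for `netcat <host> 3142`: attempt a TCP connect and report
# whether anything accepted it (refused/timeout/unreachable -> False).
def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_connect("127.0.0.1", 3142))
```

A False here while netstat shows a listener usually points at a bind/interface mismatch or a firewall, which is where the discussion heads next.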
18:38 arubi do you have the pid of the process bound to on that port?
18:39 arubi should be on that netstat line
18:39 grubles 20463
18:39 arubi `sudo kill -15 20463`
18:39 arubi then `ps -p 20463` , see that it's gone
18:40 grubles ok, it's killed
18:40 arubi alright, try to sudo restart cacher-ng again
18:40 arubi (clean .conf)
18:41 grubles ok done
18:41 arubi okay, netcat to and its port, see if it connects
18:41 grubles nope
18:42 arubi so it's just "connection refused" ?
18:42 grubles correct
18:42 arubi and I bet netstat shows it running and bound to ?
18:43 grubles yep!
18:44 arubi I think we need to call a scientist
18:44 grubles haha
18:47 bugs_ what about firewall
18:53 arubi on the host VM?
18:54 arubi if it's anything, it's between the host VM and the metal host. I don't know much about qubes os
18:55 grubles oh this isn't on qubes
18:56 grubles it's on vanilla fedora 26
18:56 grubles i figured there was some esoteric VM separation mechanism in qubes
18:57 grubles well...of course there is...but i mean specifically for networking in this context
18:59 arubi can you try, stop the apt-cacher service, then try to ssh from the host vm into again
19:00 arubi see if it is the cacher's thing or some actual interface
19:00 Sentineo maybe you need to use a netns? :)
19:01 grubles ssh still works
19:01 arubi maybe now you can create an ssh tunnel from to a debian repo's ip and port..
19:02 arubi then when the guest tries it, it'll just be redirected? it's all so weird :)
19:02 arubi Sentineo, I guess he could if you could write the gitian docs for it :P
19:03 arubi it's so weird, it's running, it's on the correct port, it's listening to all interfaces..
19:04 grubles where is it even specified to use the apt-cacher instead of just the actual repos
19:04 Sentineo I am too lazy to scroll up what it is all about :)
19:04 grubles and why
19:04 Sentineo but sounds like a vm thing, container routing issue
19:05 arubi probably so you could easily fix versions without them disappearing from the actual repos
19:05 grubles makes sense
19:06 arubi is your network capable of ipv6 stuff? mine isn't and I always disable it everywhere as a habit
19:06 arubi (specifically apt-get often fails on new vms and stuff)
19:06 grubles my local net is
19:06 grubles but not supported by my isp
19:07 arubi hm
19:07 Sentineo u use 6to4?
19:07 arubi oh no no
19:07 grubles no
19:07 Sentineo then turn it off on your local net
19:08 arubi did cacher-ng listen on ipv6 also on that netstat stuff?
19:08 Sentineo router advertisements trigger that shit automatically :)
19:10 grubles arubi, yea
19:11 arubi probably best to turn it all off then, disable ipv6 on the host vm
19:13 arubi also on acng.conf :
19:13 arubi Virtual page accessible in a web browser to see statistics and status
19:13 arubi # information, i.e. under http://localhost:3142/acng-report.html
19:13 arubi ReportPage: admin
19:13 arubi might be worth to check if that's reachable at all in a browser from the host vm
19:22 grubles i gotta run for a bit
19:22 arubi no worries
20:32 grubles ok yeah let's see if i can get the report page
20:35 grubles i can curl http://localhost:3142 from the host
20:40 grubles it's.......seemingly working now
20:41 grubles wtf
20:41 grubles (it being the build script)
20:57 arubi hah, weird.