Logged Conversation (all times UTC)
[02:39] <stellar-slack> since there's no S3 on Vagrant I'm using Python's SimpleHTTPServer for history, which could have something to do with it, though I don't see any history-related errors in the logs
[02:39] <stellar-slack> just curling between hosts seems to work fine though
[04:52] <stellar-slack> What is /.well-known/ for?
[04:52] <stellar-slack> `cp: cannot stat '[path]history/vs/.well-known/stellar-history.json': No such file or directory`
[05:02] <stellar-slack> @donovan: `PEER_SEED` is safe to set. If you crank up a validator let me know!
[05:02] <stellar-slack> ```COMMANDS=[
[05:02] <stellar-slack> "ll?level=info"
[05:02] <stellar-slack> ]```
[05:02] <stellar-slack> ^ gave some insight the other day when things were rough-ish (reduced quorum appropriately).
[05:02] <stellar-slack> Everything seems to be chugging along swimmingly.
[05:03] <stellar-slack> Or swimming along chuggishly :simple_smile:
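The `ll?level=info` entry above is one of stellar-core's admin commands; anything listed under COMMANDS is run against the node when it starts up. As a rough sketch, the same command can also be issued at runtime over the HTTP admin port (assuming here that HTTP_PORT is 39132; substitute whatever your .cfg actually sets):

```
# hypothetical runtime equivalent of the COMMANDS=["ll?level=info"] entry above;
# 39132 is an assumed HTTP_PORT - adjust to match your stellar-core .cfg
curl -s "http://127.0.0.1:39132/ll?level=info"
```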
[05:03] <stellar-slack> the well-known thing is from https://tools.ietf.org/html/rfc5785 - I believe it's a jumping off point for nodes to pull history from each other
[05:04] <stellar-slack> thanks @matschaffer - I did uncomment [H1] so maybe that's why it's popped up?
[05:04] <stellar-slack> it's related, basically it's trying to get history from `[path]history/vs/.well-known/stellar-history.json` but it's not there
[05:04] <stellar-slack> one of the get= statements probably matches
[05:06] <stellar-slack> I ask get to fetch from HISTORY.h1 (or whatever it's called). No put there in the sample .cfg so I left it alone.
[05:07] <stellar-slack> it wouldn't sync before uncommenting at least one history entry.. might have been bad timing on my part when trying to connect, though.
[05:08] <stellar-slack> ```[HISTORY.h1]
[05:08] <stellar-slack> get="curl -sf https://s3-eu-west-1.amazonaws.com/history.stellar.org/prd/core-testnet/core-testnet-001/{0} -o {1}"```
[05:09] <stellar-slack> do you have a `cp` in there? https://github.com/stellar/stellar-core/blob/master/docs/stellar-core_example.cfg has it under HISTORY.vs
[05:10] <stellar-slack> jed: the full network on master (03e99d7) shows all 3 on "state" : "APP_ACQUIRING_CONSENSUS_STATE" - so at least the 3 of them don't get stuck at booting
[05:10] <stellar-slack> ```[HISTORY.vs]
[05:10] <stellar-slack> get="cp ~/stellar-core/bin/tmp/stellar-core/history/vs/{0} {1}"
[05:10] <stellar-slack> put="cp {0} ~/stellar-core/bin/tmp/stellar-core/history/vs/{1}"
[05:10] <stellar-slack> mkdir="mkdir -p ~/stellar-core/bin/tmp/stellar-core/history/vs/{0}"```
[05:12] <stellar-slack> I understand ./tmp gets rm -f -- should I direct it somewhere more permanent?
[05:14] <stellar-slack> (I mean ./tmp as in path from stellar-core)
[05:16] <stellar-slack> that looks reasonable enough so long as the user running stellar-core has permission to create those directories
[05:17] <stellar-slack> yeah, it does... however it can't stat it, as it's never actually created
[05:17] <stellar-slack> oh, also is stellar-core running in ~/stellar-core/bin/ ?
[05:17] <stellar-slack> if so that's probably dangerous
[05:17] <stellar-slack> since by default it'll expect to have control of tmp in the cwd
[05:17] <stellar-slack> ya - this is on a VM
[05:17] <stellar-slack> separate from history
[05:18] <stellar-slack> if you want to put history in that particular location you should set TMP_DIR_PATH="..." to put tmp somewhere else
[05:18] <stellar-slack> though I would probably opt to move history
[05:18] <stellar-slack> rgr that -- I'll move history to somewhere more permanent
[05:18] <stellar-slack> then you can use the default TMP_DIR_PATH and BUCKET_DIR_PATH which assume they can control `pwd`/buckets and `pwd`/tmp
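A minimal sketch of that suggestion, assuming a hypothetical /var/stellar/history/vs directory (any location outside the node's working directory would do); with the archive moved there, TMP_DIR_PATH and BUCKET_DIR_PATH can stay at their defaults so stellar-core keeps control of `pwd`/tmp and `pwd`/buckets:

```
# hypothetical permanent history location; the snippet written below would
# replace the existing [HISTORY.vs] section of the .cfg
mkdir -p /var/stellar/history/vs
cat > history-vs-snippet.cfg <<'EOF'
[HISTORY.vs]
get="cp /var/stellar/history/vs/{0} {1}"
put="cp {0} /var/stellar/history/vs/{1}"
mkdir="mkdir -p /var/stellar/history/vs/{0}"
EOF
```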
[05:20] <stellar-slack> so, for clarification, /.well-known/ is transient historical share ?
[05:27] <stellar-slack> it's part of the historical share, not sure how transient it is
[05:28] <stellar-slack> I believe it's a bit of an entry point into the rest of the history store
[05:31] <stellar-slack> @matschaffer: do you think it might have been a lucky coincidence that this node sync'd quickly after uncommenting the [H1] (i.e. bad timing on my part when firing it up)?
[05:32] <stellar-slack> I mean.. I read "give it 5 minutes" (paraphrasing) but it sync'd in about 70 secs from Aussie after I got H1
[05:33] <stellar-slack> tbh, with the new status messages I'm unclear on what "sync'd" even looks like :wink:
[05:34] <stellar-slack> are you sure it's synced?
[05:34] <stellar-slack> yeah, positively
[05:34] <stellar-slack> how?
[05:34] <stellar-slack> cause I'd like to know if I'm synced
[05:34] <stellar-slack> :stuck_out_tongue:
[05:34] <stellar-slack> lemme copy/pasta
[05:34] <stellar-slack> brb!
[05:34] <stellar-slack> :thumbsup:
[05:36] <stellar-slack> ```2015-04-13T15:04:56.922 1e6e68 [Ledger] INFO Got consensus: [seq=62157, prev=5203b5, time=1428903293, txs=0, txhash=aca293, fee=10]
[05:36] <stellar-slack> 2015-04-13T15:04:56.925 1e6e68 [Ledger] INFO Closed ledger: [seq=62157, hash=97a54e]```
[05:36] <stellar-slack> ``` "slot" : [
[05:36] <stellar-slack> {
[05:36] <stellar-slack> "ballot" : "(0,b025a8)",
[05:36] <stellar-slack> "committed" : true,
[05:36] <stellar-slack> "heard" : true,
[05:36] <stellar-slack> "index" : 62163,
[05:36] <stellar-slack> "pristine" : true,
[05:36] <stellar-slack> "statements" : [
[05:36] <stellar-slack> "b:(0,b025a8) n:3b8711 q:df53a1 ,PREPARING",
[05:36] <stellar-slack> "b:(0,b025a8) n:000000 q:df53a1 ,PREPARED",
[05:36] <stellar-slack> "b:(0,b025a8) n:3b8711 q:df53a1 ,PREPARED",
[05:36] <stellar-slack> "b:(0,b025a8) n:000000 q:df53a1 ,COMMITTING",
[05:36] <stellar-slack> "b:(0,b025a8) n:3b8711 q:df53a1 ,COMMITTING",
[05:36] <stellar-slack> "b:(0,b025a8) n:000000 q:df53a1 ,COMMITTED",
[05:36] <stellar-slack> "b:(0,b025a8) n:3b8711 q:df53a1 ,COMMITTED"
[05:36] <stellar-slack> ]```
[05:37] <stellar-slack> that blob is part of info?
[05:38] <stellar-slack> the stdout shit?.. yeah
[05:38] <stellar-slack> ```{
[05:38] <stellar-slack> "info" : {
[05:38] <stellar-slack> "ledger" : {
[05:38] <stellar-slack> "age" : 2,
[05:38] <stellar-slack> "closeTime" : 1428903453,
[05:38] <stellar-slack> "hash" : "705602b8bb93cdaec6bec6ef2e2c2f2c348890e5433a9bd1dd9f6b38b5af5b19",
[05:38] <stellar-slack> "num" : 62190
[05:38] <stellar-slack> },
[05:38] <stellar-slack> "numPeers" : 3,
[05:38] <stellar-slack> "state" : "Synced"
[05:38] <stellar-slack> }
[05:38] <stellar-slack> }
[05:38] <stellar-slack> ```
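The blob above is the output of the node's `info` command, and a caught-up node reports "state" : "Synced" as shown. A quick way to poll it (along with the `peers` list that comes up further down) without scraping stdout, again assuming an HTTP_PORT of 39132; substitute whatever your .cfg sets:

```
# hypothetical polling of the admin commands quoted in this log;
# 39132 is an assumed HTTP_PORT - adjust to your .cfg
curl -s http://127.0.0.1:39132/info
curl -s http://127.0.0.1:39132/peers
```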
[05:38] <stellar-slack> oh, that json blob is in the stdout too
[05:38] <stellar-slack> I only get 3 peers btw
[05:38] <stellar-slack> this is on https://github.com/stellar/stellar-core/commit/03e99d77f9b21eaf8828cd8736bcde77dccda4be peering to SDF?
[05:39] <stellar-slack> we only run 3 so that may be right
[05:39] <stellar-slack> :simple_smile:
[05:39] <stellar-slack> I think donch reported earlier that he had 3
[05:39] <stellar-slack> mighta been someone else
[05:40] <stellar-slack> but you guys are the only public connects for now anyways
[05:41] <stellar-slack> I could confirm IPs for you if you want. Hopefully you see external IPs.. not sure on that now that I think about it
[05:43] <stellar-slack> they're all 54's which kinda makes sense
[05:43] <stellar-slack> that's good
[05:44] <stellar-slack> 54.227.13.198 54.161.230.185 and 54.225.54.235 are us
[05:44] <stellar-slack> oh.. I also knocked the quorum down to 1 before shit started working.. unsure if it was due to denial of service to "all but 1" of your lot
[05:46] <stellar-slack> again - I might have started this VM at a less than appropriate time
[05:46] <stellar-slack> oh... that's probably what's got you moving then
[05:46] <stellar-slack> I think quorum=1 would mean your node continues on its own regardless of what testnet feeds it
[05:48] <stellar-slack> I've unset validating.. just sucking up what I'm instructed to trust by `3b8711`
[05:48] <stellar-slack> scott: heads up, I cut a pre-alpha1 horizon release - simplifies the vagrant stuff if I can just grab a jar rather than the full build https://github.com/stellar/horizon/releases/tag/pre-alpha1
[05:49] <stellar-slack> epsilon: ah. interesting.
[05:49] <stellar-slack> it seems to be agreeing with him, so he's a pretty cool guy
[05:52] <stellar-slack> hrm... I wonder what that even translates to cause I don't have 3b8711 in our list of keys
[05:54] <stellar-slack> or much of anything else really (logs, peer lists)
[05:56] <stellar-slack> oh
[05:56] <stellar-slack> I saw him 1st time when I had a minion tag, too (aeb something rather than 000000)
[05:58] <stellar-slack> do you think this might be loopback and I'm agreeing with myself?
[05:59] <stellar-slack> idk.. 3 peers, sync'd
[05:59] <stellar-slack> seems possible, but I wouldn't know how to find out for sure
[06:00] <stellar-slack> I'm starting a single-peer vm just to see what that behavior looks like
[06:00] <stellar-slack> coolio
[06:00] <stellar-slack> not sure what quorum=1 without validation would lead to at this point. Haven't actually tried a non-validating node yet
[06:00] <stellar-slack> I did assume that 1 === 1 + me
[06:00] <stellar-slack> if validation's on 1 == me, I've seen that
[06:01] <stellar-slack> I guess you could find out pretty quickly
[06:01] <stellar-slack> I'm 0's on validating atm
[06:01] <stellar-slack> remove the list of preferred peers and see what happens
[06:01] <stellar-slack> ```{
[06:01] <stellar-slack> "peers" : [
[06:01] <stellar-slack> {
[06:01] <stellar-slack> "id" : "gsQfnr7mWcFqpFddmp9J6cMxEqk8C3BwqQdmD42bmYPTcPLHmNR",
[06:01] <stellar-slack> "ip" : "54.161.230.185",
[06:01] <stellar-slack> "port" : 39133,
[06:01] <stellar-slack> "pver" : 1,
[06:01] <stellar-slack> "ver" : "c55a8b6"
[06:01] <stellar-slack> },
[06:01] <stellar-slack> {
[06:01] <stellar-slack> "id" : "gsdbsJKV6uNaLz7VPDRKzb8ZR5ao26L3eT1kioMu32Yg2mBtsgF",
[06:01] <stellar-slack> "ip" : "54.227.13.198",
[06:01] <stellar-slack> "port" : 39133,
[06:01] <stellar-slack> "pver" : 1,
[06:01] <stellar-slack> "ver" : "c55a8b6"
[06:01] <stellar-slack> },
[06:01] <stellar-slack> {
[06:01] <stellar-slack> "id" : "gsj7qk6sDE2Rv979bz3kwzuaQ9pRVKSBLYtrD6NNuLriqRW7ccQ",
[06:02] <stellar-slack> disconnect: instant freeze
[06:03] <stellar-slack> ```2015-04-13T15:32:56.585 1e6e68 [Overlay] INFO TCPPeer::drop@39133 to 39133 in state 2
[06:03] <stellar-slack> 2015-04-13T15:32:56.900 1e6e68 [Overlay] INFO TCPPeer::drop@39133 to 39133 in state 2
[06:03] <stellar-slack> 2015-04-13T15:32:57.411 1e6e68 [Overlay] INFO TCPPeer::drop@39133 to 39133 in state 2
[06:03] <stellar-slack> 2015-04-13T15:32:58.259 1e6e68 [Overlay] INFO New connected peer 54.225.54.235:39133
[06:03] <stellar-slack> 2015-04-13T15:32:58.261 1e6e68 [Overlay] INFO New connected peer 54.161.230.185:39133
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] INFO New connected peer 54.227.13.198:39133
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] WARN @39133 connectHandler error: Network is unreachable
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] INFO TCPPeer::drop@39133 to 39133 in state 0
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] WARN @39133 connectHandler error: Network is unreachable
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] INFO TCPPeer::drop@39133 to 39133 in state 0
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] WARN @39133 connectHandler error: Network is unreachable
[06:03] <stellar-slack> 2015-04-13T15:32:58.263 1e6e68 [Overlay] INFO TCPPeer::drop@39133 to 39133 in state 0
[06:03] <stellar-slack> ```
[06:04] <stellar-slack> well, I was thinking like bringing it up with no set peers & validation off - just dropping the network could be a different case
[06:05] <stellar-slack> though really I'm just spitballing
[06:05] <stellar-slack> rgr that, will try
[06:05] <stellar-slack> I'm sure it'd be a more efficient use of time to have this discussion with the core devs around but if you're bored I'm around :simple_smile:
[06:05] <stellar-slack> haha you're the gun now, Champ
[06:05] <stellar-slack> step up!
[06:06] <stellar-slack> (Also, pay rise!)
[06:06] <stellar-slack> haha
[06:06] <stellar-slack> well, I'm also elbow-deep in a horizon vagrant setup
[06:07] <stellar-slack> on reconnection while s-core is still running, it is trying to sync back up as expected from the external nodes
[06:07] <stellar-slack> I think it's def doing everything correctly
[06:07] <stellar-slack> and quite fast, too
[06:16] <stellar-slack> or.. maybe not. It appears to be receiving consensus but is awaiting catchup on a checkpoint
[06:16] <stellar-slack> which it might never find
[06:16] <stellar-slack> yay! we broke it!
[06:17] <stellar-slack> no, we didn't
[06:18] <stellar-slack> fucken thing pwned me
[06:18] <stellar-slack> it pretended "idk wtf m8?" for awhile
[06:18] <stellar-slack> then spat out a big heap of "let's replay the ledger"
[06:18] <stellar-slack> and.. back to no problems
[06:19] <stellar-slack> stellar-core's got sass
[06:19] <stellar-slack> I'm sure this is mostly smoke and mirros
[06:19] <stellar-slack> with an r !
[06:23] <stellar-slack> for reference:
[06:23] <stellar-slack> ```2015-04-13T15:44:33.893 1e6e68 [Ledger] INFO Replaying buffered ledger-close for 62627
[06:23] <stellar-slack> 2015-04-13T15:44:33.894 1e6e68 [Ledger] INFO Replaying buffered ledger-close for 62628
[06:23] <stellar-slack> 2015-04-13T15:44:33.896 1e6e68 [Ledger] INFO Replaying buffered ledger-close for 62629
[06:23] <stellar-slack> 2015-04-13T15:44:33.896 1e6e68 [Ledger] INFO Replaying buffered ledger-close for 62630
[06:23] <stellar-slack> 2015-04-13T15:44:33.897 1e6e68 [Ledger] INFO Replaying buffered ledger-close for 62631
[06:23] <stellar-slack> 2015-04-13T15:44:33.898 1e6e68 [Ledger] INFO Replaying buffered ledger-close for 62632
[06:23] <stellar-slack> 2015-04-13T15:44:33.900 1e6e68 [Ledger] INFO Caught up to LCL including recent network activity: [seq=62632, hash=8b031a]
[06:23] <stellar-slack> 2015-04-13T15:44:33.900 1e6e68 [Ledger] INFO Got consensus: [seq=62633, prev=8b031a, time=1428905673, txs=0, txhash=88adc3, fee=10]
[06:23] <stellar-slack> cp: cannot stat '~/stellar-core/bin/tmp/stellar-core/history/vs/.well-known/stellar-history.json': No such file or directory
[06:23] <stellar-slack> 2015-04-13T15:44:33.901 1e6e68 [Ledger] INFO Closed ledger: [seq=62633, hash=c9fea8]
[06:23] <stellar-slack> ```
[06:24] <stellar-slack> "cannot stat" is likely my poor use of .cfg
[06:25] <stellar-slack> k, so I can get that same behavior if I comment out validation seed & set quorum=1
[06:26] <stellar-slack> or with the validation seed in there too
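A rough sketch of the two knobs toggled in that experiment; the key names are an assumption based on memory of the stellar-core_example.cfg of this era, so verify them against your copy before relying on this:

```
# assumed key names (VALIDATION_SEED, QUORUM_THRESHOLD) - check stellar-core_example.cfg
# comment out the validation seed so the node observes without validating...
sed -i 's/^VALIDATION_SEED/#VALIDATION_SEED/' stellar-core.cfg
# ...and drop the quorum threshold to 1 so a single agreeing peer satisfies it
sed -i 's/^QUORUM_THRESHOLD=.*/QUORUM_THRESHOLD=1/' stellar-core.cfg
```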
[06:29] <stellar-slack> OT: fee=10 means every ledger is taking a fee, regardless of txs=0
[06:30] <stellar-slack> "of" -> "that"
[06:32] <stellar-slack> no that is just the base fee
[06:32] <stellar-slack> in stroops
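For scale, a back-of-the-envelope check assuming the usual definition of a stroop as one ten-millionth of a currency unit:

```
# base fee of 10 stroops expressed in whole currency units (1 unit = 10,000,000 stroops)
echo "scale=7; 10 / 10000000" | bc   # -> .0000010
```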
[06:39] <stellar-slack> jed: so we'll always have immediate access to basal things such as network fee?
[06:41] <stellar-slack> jed: sry.. I mean are there any other basal things that are planned in that response?
[07:50] <stellar-slack> scott: also let me know if you have anything in the works for horizon rake tasks when running in-jar. I think I'm gonna close the ticket about running on jboss, didn't realize torquebox 4 was such a different beast than 3 - hopefully when torquebox 4 gets jboss eap support we can pick it up pretty easily
[14:56] <stellar-slack> mat: I'll be working on the rake setup today
[16:21] <stellar-slack> thanks tigre, someone from our dev team will be reaching out to you to make sure we can work on this with any helpful info you might have
[16:34] <stellar-slack> my pleasure! Please don't hesitate to contact me for any further assistance.
[16:56] <stellar-slack> http://zc.qq.com/en/index.html - sign up for a QQ account with email.
[19:59] <stellar-slack> Getting rake tasks to run within the torquebox packaged jar is turning out to be pretty tough. Warbler apparently supports a simple system for running rake tasks from inside a jar… checking that out now
[20:39] <stellar-slack> `ERROR readHeaderHandler error: End of file [TCPPeer.cpp:291]`