token cache testing #192

Open
buhtignew opened this issue Sep 27, 2024 · 2 comments
Labels: anomaly (If something works not as expected), enhancement (New feature or request)

Comments

@buhtignew
Collaborator

buhtignew commented Sep 27, 2024

Both anomalies noticed for the token init command in #191 are also present in the token cache command.

I.e.

  1. By running token cache -b 100 -a, the output mentions that all 68 decks will be scanned (and not only those initialized), but as a result the fresh blocklocator.json file has 97 lines added, and none of them seems to contain the addresses of the non-initialized decks.
    So it's probably only an issue with the token cache output.
  2. The token cache -b 100 command seems to check 4 times during execution whether the blocklocator.json file exists; at least when there is no blocklocator.json file in place, the message File does not exist. is displayed 4 times.

_
I've tested token cache ATTokenNewSpec2_local -c and there was the following line in the output: Start block: 131739 End block: 559804 Number of blocks: 50000.
So it seemed that, instead of scanning the full chain, only the default number of blocks (50k) would be scanned.
However, the scanning didn't stop after the first 50k blocks were inspected, so probably only the last part of the message (Number of blocks: 50000) needs to be corrected.
_
The same command's output has the following line in it: You can interrupt the scan at any time with KeyboardInterrupt (e.g. CTRL-C) and continue later, calling the same command.
For those using pacli_env, the CTRL-C combination would not only interrupt the token cache command but also pacli_env itself, which is inconvenient.
On the other hand, the above message somewhat conveys the idea that the interrupted scanning would somehow resume from the point where it was halted. Or maybe it's just my impression.
However, I was wondering whether it's possible to find a way to gracefully interrupt the scanning so as to preserve the results gathered up to that point, by adding the last analyzed block's hash and the blocks found so far to blocklocator.json. Maybe there is a possibility to launch a process alongside the scanning that would wait for a certain key combination as a signal to proceed with the graceful interruption?
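Something along these lines is what I have in mind; this is only a rough sketch using a helper thread that waits for Enter, and scan_one_block() and save_progress() are placeholders, not pacli's actual internals:

```python
# Rough sketch only (not pacli code): a background thread watches for a stop
# request, and the scan loop saves partial results before exiting.
import threading

stop_requested = threading.Event()

def wait_for_stop_key():
    # Background thread: pressing Enter requests a graceful stop.
    input("Press Enter at any time to stop the scan gracefully...\n")
    stop_requested.set()

def scan_one_block(height):
    pass  # placeholder for the real per-block scanning work

def save_progress(height):
    pass  # placeholder: write the last analyzed block and the blocks found so far to blocklocator.json

def scan(start_height, end_height):
    threading.Thread(target=wait_for_stop_key, daemon=True).start()
    for height in range(start_height, end_height + 1):
        if stop_requested.is_set():
            save_progress(height)
            print(f"Scan stopped gracefully at block {height}.")
            return
        scan_one_block(height)
    save_progress(end_height)
```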
_
I'd also like to ask whether it would make sense / be possible to shorten the scanning process output. I.e. right now, if 50k blocks are being scanned, a line is printed for every 100 blocks, so when the scanning is finished there are a lot of lines on the screen. Maybe it would be possible to overwrite each previous line with the next one instead, thus displaying only a single, continuously updated line?
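In plain Python that kind of single updating line can be done by printing with a carriage return instead of a newline; a minimal sketch (not pacli code, the loop only simulates the scan):

```python
# Minimal sketch: "\r" moves the cursor back to the start of the line, so each
# progress message overwrites the previous one instead of adding a new line.
import time

total = 50000
for height in range(0, total + 1, 100):
    print(f"\rScanned {height}/{total} blocks", end="", flush=True)
    time.sleep(0.01)  # stand-in for the real per-block work
print()  # final newline so the next output starts on a fresh line
```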
_
There is the following line in the token cache -h output for the -b flag: Number of blocks to store (default: 50000) (ignored in combination with -f).
This line has probably remained untouched since the -f flag was changed into -c, so the -f should be edited to -c here.
_
I don't know whether I've got it right, but it seems that when token cache is run with the -c flag, the final block to be scanned is the highest block at the moment the command was launched.
However, since the scan can take a long time, by the time it is over the current block may be much further ahead, so the -c flag wouldn't have performed a full scan of the blockchain.
Would it make sense to make the code re-run the same command once, or even several times, until the chain is truly scanned in full?
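What I imagine is a loop roughly like the sketch below (get_block_count() and scan_blocks() are just placeholders for whatever pacli uses internally):

```python
# Hypothetical sketch of "re-scan until caught up with the chain tip".
def scan_until_tip(get_block_count, scan_blocks, start_height):
    while True:
        tip = get_block_count()           # chain height when this pass starts
        if start_height > tip:
            break                         # no new blocks appeared, fully caught up
        scan_blocks(start_height, tip)    # one scanning pass
        start_height = tip + 1            # next pass begins right after this tip
```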
_
I've made a complete scan of the default PoB token. Then I've launched a complete scan for another PoB token (using the token cache 66c25ad60538a9de0a7895d833a4a3aeeacdd75b1db9c5dd69c3746dd21d39be -c command).
The deck is not initialized on my side, but there was no message about that; maybe it's not necessary in this case.
The scanning began at block 132353, since the output reported First deck spawn at block height: 132353, although the startblock of the deck is 132000. So I assume the deck was created with a startblock that lies before the block at which the deck was actually created.
However, since the endblock of the deck is 140000, I was expecting the scanning to stop at that block, which hasn't happened.
Another expectation I had was that the full scan I've already done for the main PoB token would be enough to consider the scan for another PoB token done as well, since the burn transactions of one PoB token are also valid for all the others. Or at least I thought the scanning process would be much quicker, since the goal would only be to find the corresponding claim transactions for burn transactions that are already known.
But maybe finding the claim transactions amounts to the same as scanning the full chain, or this kind of optimization is simply not a priority at this stage of our work.

buhtignew added the enhancement (New feature or request) and anomaly (If something works not as expected) labels on Sep 27, 2024
@d5000

d5000 commented Oct 20, 2024

> Both anomalies noticed for the token init command in #191 are also present in the token cache command.

#191 was fixed, and this should no longer be a problem here.

The 50000 blocks shown in the message even with the -c option was also a small bug; fixed.

> On the other hand, the above message somewhat conveys the idea that the interrupted scanning would somehow resume from the point where it was halted. Or maybe it's just my impression.

Yes, that's the expected behavior. I just tried it and it works this way on my side.

> Maybe there is a possibility to launch a process alongside the scanning that would wait for a certain key combination as a signal to proceed with the graceful interruption?

Unfortunately I have no idea how to do this, and I won't spend time on it either. For now it works with the KeyboardInterrupt exception.

Edited: You seem to be lucky; I found this method: https://stackoverflow.com/questions/68474167/how-to-replace-the-keyboardinterrupt-command-for-another-key . If it works I'll implement it this way. The reason I'm a bit hesitant to fiddle around with this, however, is that keyboard interrupts are OS-dependent, so this may lead to bugs on Windows, FreeBSD etc. But if, as it seems, it's not much code, then I'll try it.
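Roughly, the approach from that answer would look like the sketch below, using the third-party keyboard module (scan_one_block() and save_progress() are placeholders, not actual pacli functions):

```python
# Sketch only: poll the "keyboard" module for a stop key inside the scan loop.
# Note: "keyboard" is a third-party package (pip install keyboard) and needs
# root privileges on Linux, which is part of the extra-dependency concern.
import keyboard

def scan_one_block(height):
    pass  # placeholder for the real per-block scanning work

def save_progress(height):
    pass  # placeholder: persist partial results to blocklocator.json

def scan(start_height, end_height):
    for height in range(start_height, end_height + 1):
        if keyboard.is_pressed("q"):      # stop key instead of CTRL-C
            save_progress(height)
            print(f"Scan interrupted gracefully at block {height}.")
            return
        scan_one_block(height)
```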

Edited 2: Unfortunately the module used in this answer, keyboard, creates an additional dependency, as it is not a standard module. Is this really worth it?

The -f to -c flag description was fixed.

> I'd also like to ask whether it would make sense / be possible to shorten the scanning process output.

For now I'll set the default to one line per 500 blocks.

> Would it make sense to make the code re-run the same command once, or even several times, until the chain is truly scanned in full?

Re-running would not be my preferred option. If there's an easy way to do this I can look into it, but this is low priority for me.

> The deck is not initialized on my side, but there was no message about that; maybe it's not necessary in this case.

The P2TH address does not need to be imported into the node, so you can scan all tokens, even non-initialized ones. However, a message should be shown, and in my case it appeared.

> The scanning began at block 132353, since the output reported First deck spawn at block height: 132353, although the startblock of the deck is 132000

The deck you mentioned indeed was created at block 132353.

(Edited:) Ah, I think I know what you mean now. The deck startblock, i.e. the first block where burn transactions will be accepted for this deck, is indeed 132000. In these cases we could indeed, in theory, limit the caching before the deck spawn to the startblock.

EDIT: Decided against this. The problem is that if you store a "limited" PoB deck this way, and later you want to cache a deck which accepts all burn transactions, or even those starting one block earlier, you would need to re-cache everything; the way it currently works, you wouldn't know that the caching is incomplete (for simplicity reasons, the start block is not added to blocklocator.json).

Anyway I think the burn address will always be cached fully.

> However, since the endblock of the deck is 140000, I was expecting the scanning to stop at that block, which hasn't happened.

This is because after the end block there can be claims and transfers (cards) of this token, and to know the "state" of a PoB/AT token you need both the burn/gateway txes and the claims/transfers.

The "startblock" and "endblock" parameters only refer to the gateway/burn transactions which are valid to claim such tokens. But if the token has startblock 132000 and endblock 140000, then you can burn your coins at 136000 for example and then claim them at block 180000, and then transfer the card at 200000. You need all these block heights in blocklocator.json, including those blocks where other people transfer cards of course, to be able to know the token state. So these blocks are included in the caching process.

(Not sure if still relevant: I've tried it with 'testtoken18x9' (25d70b251166fb01d981cf0cfd420417bf3b2447df8c54863db2a1b51c98545f) and it started to scan exactly at the deck spawn block height (552031).)

> Another expectation I had was that the full scan I've already done for the main PoB token would be enough to consider the scan for another PoB token done as well, since the burn transactions of one PoB token are also valid for all the others.

Only if the deck is significantly newer than the first valid burn transaction could some time be saved, because then you would only need to check the CardTransfer transactions for this deck, as the burn transactions are already saved.

IMO this does already work, however (it should start at either the "last cached PoB address block" or the "deck spawn block", whichever is lower). I looked into the code and confirmed that.
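In other words, the intended choice of start height is roughly this (hypothetical names, not the actual pacli code):

```python
# Sketch of the start-height choice described above.
def scan_start_height(last_cached_pob_block: int, deck_spawn_block: int) -> int:
    # Resume at whichever point is lower, so nothing between them is skipped.
    return min(last_cached_pob_block, deck_spawn_block)
```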

The fixes I mentioned are in commit 69c9e64.

@buhtignew
Collaborator Author

buhtignew commented Oct 29, 2024

> Is this really worth it?

No, please don't waste your time on it.

> However, a message should be shown, and in my case it appeared.

I've deleted my blocklocator.json file and run token cache 66c25ad60538a9de0a7895d833a4a3aeeacdd75b1db9c5dd69c3746dd21d39be -c.
There is no hint about the deck not being initialized, although the output of token list | grep 66c25ad60538a9de0a7895d833a4a3aeeacdd75b1db9c5dd69c3746dd21d39be is the following:

| abcd                      | 66c25ad60538a9de0a7895d833a4a3aeeacdd75b1db9c5dd69c3746dd21d39be | PoBTokenLimited2          | mvrm2HAoKqiCmeQiwKMuEFpeEn7rJEmpMz | 1 | 458020 |   |

but maybe it's not relevant.

From here on I haven't understood all of your replies; I'll post my comments here later, once I'm ready.
