wip: l1 block hashes poc #1016

Draft
wants to merge 59 commits into base: develop

Changes from 25 commits (59 commits in total)
3330663
wip: add apply l1 block hashes txs logic
lastminutedev Oct 25, 2023
2a6c74f
wip: add chunk trace and block context fixes
lastminutedev Oct 25, 2023
18b18e0
feat: add L1Blocks contract
lastminutedev Oct 27, 2023
e20e5ca
fix: add L1Blocks contract fix and unit tests
lastminutedev Nov 10, 2023
fd47940
fix: add fix
lastminutedev Nov 13, 2023
dc90862
test: fix database test
lastminutedev Nov 13, 2023
151631e
script: L1Blocks deployment
failfmi Nov 14, 2023
fe21b54
fix: add update chunk prover task
lastminutedev Nov 14, 2023
aa4005a
feat(contracts): implement and test `L1ViewOracle`
reo101 Nov 14, 2023
68eff41
test: fix tests
lastminutedev Nov 14, 2023
07e5fed
test: fix tests
lastminutedev Nov 15, 2023
708b748
test: fix tests
lastminutedev Nov 15, 2023
b135dbc
script: L1ViewOracle deployment
failfmi Nov 16, 2023
24f9020
wip: add get l1 block range hash directly from l1 client for chunk pr…
lastminutedev Nov 17, 2023
49ecf53
fix: fix issues after rebase
lastminutedev Nov 20, 2023
9037ce9
wip: add last applied l1 block to chunk data
lastminutedev Nov 21, 2023
0649057
wip: add update chunk info and batch header
lastminutedev Nov 22, 2023
b484a71
wip: add update chunk/batch prove/verify
lastminutedev Nov 23, 2023
aaa6eca
wip: add update chunk/batch prove/verify
lastminutedev Nov 27, 2023
a2f9474
compile(contracts): add temporary via-ir flag
failfmi Nov 28, 2023
450cf38
build(coordinator): explicit amd64 platform
failfmi Nov 28, 2023
d147e85
build(docker): coordinator and rollup image names
failfmi Nov 28, 2023
cfce8b2
fix: scroll l1 contract
lastminutedev Nov 28, 2023
e99e26f
compile(contracts): remove via-ir flag
failfmi Nov 30, 2023
6507e1d
refactor(contracts/ScrollChain): stack too deep; fix ChunkCode.lastAp…
failfmi Nov 30, 2023
74da939
inline-docs(contracts/ScrollChain): commitChunk return argument
failfmi Nov 30, 2023
0616d6c
docker: switch push org name
failfmi Dec 1, 2023
aa05f80
fix(rollup): batch l1 block range hash
failfmi Dec 6, 2023
02ed852
build(docker/rollup-relayer): explicit platform
failfmi Dec 6, 2023
16ff818
feat(rollup): use forked go-ethereum
failfmi Dec 6, 2023
a0d5912
fix(contracts/l1vieworacle): remove from check
failfmi Dec 6, 2023
dbeed30
feat(rollup): log chunk process block range
failfmi Dec 6, 2023
ce16012
fix(rollup/chunk-proposer): blockRangeHash parse result
failfmi Dec 7, 2023
7fafc28
feat(rollup)!: rebuild `ScrollChain` contract ABI
reo101 Dec 7, 2023
72fe2d2
test: fix test
lastminutedev Dec 12, 2023
2678c84
fix(types/chunk): encode
failfmi Dec 13, 2023
e9a7c86
fix(types/chunk): exclude l1 block hashes
failfmi Dec 13, 2023
d2eac6e
fix(types/batch): encode/decode
failfmi Dec 13, 2023
3dd1a5d
fix(contracts/ScrollChain): batch and chunk ptr loadings
failfmi Dec 19, 2023
f951b9b
fix(types/chunk): missing l1 block hashes info in Hash
failfmi Dec 19, 2023
ab0cba5
fix(contracts/ScrollChain): handle chunks, which have the same lastAp…
failfmi Dec 19, 2023
5459003
fix(rollup): incorrect chunk and wrapped block lastAppliedL1BlockNum …
failfmi Dec 21, 2023
aeccdb0
fix(common/types): chunk encoding to match; block txs to include l1 b…
failfmi Dec 23, 2023
83f5c84
fix(prover): add gen chunk fix
lastminutedev Jan 10, 2024
6d51b40
fix(prover): add fix prover gen chunk trace
lastminutedev Jan 10, 2024
82ef045
fix: try fix zktrie build issue on linux
lastminutedev Jan 10, 2024
b314ee3
fix(coordinator/orm/block): missing query last applied l1 block column
failfmi Jan 10, 2024
19ddb66
build(coordinator_cron): explicit amd64 platform
failfmi Jan 10, 2024
a7adb12
fix(libzkp): add fix chunk trace
lastminutedev Jan 11, 2024
6bf507b
Revert "fix: try fix zktrie build issue on linux"
lastminutedev Jan 11, 2024
2729a13
feat(contracts/l1blocks): switch appendBlockhashes msg.sender modifier
failfmi Jan 11, 2024
97f3c9d
chore: upgrade libzkp
lastminutedev Jan 16, 2024
4d99b58
fix: add update libzkp dep
lastminutedev Jan 16, 2024
23ef6a1
fix: add fix libzkp cargo deps
lastminutedev Jan 16, 2024
bdd3224
fix(rollup/l2_watcher): missing l1blockhashes sender
failfmi Jan 16, 2024
2395b26
feat(rollup-relayer): latest go-ethereum hash
failfmi Feb 1, 2024
0dd91b6
test(contracts/ScrollChain): examples with 1, 2 and 256 block hashes
failfmi Feb 13, 2024
837173d
test: add ScrollChain commitBatch with 10 chunks with L1BlockHashes
lastminutedev Feb 13, 2024
8088b2e
fix: add update ScrollChain test testCommitBatchWithManyL1BlockHashesTxs
lastminutedev Feb 15, 2024
2 changes: 1 addition & 1 deletion build/dockerfiles/coordinator-api.Dockerfile
@@ -39,7 +39,7 @@ COPY --from=zkp-builder /app/target/release/libzktrie.so ./coordinator/internal/
RUN cd ./coordinator && make coordinator_api_skip_libzkp && mv ./build/bin/coordinator_api /bin/coordinator_api && mv internal/logic/verifier/lib /bin/

# Pull coordinator into a second stage deploy alpine container
-FROM ubuntu:20.04
+FROM --platform=linux/amd64 ubuntu:20.04
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/src/coordinator/internal/logic/verifier/lib
# ENV CHAIN_ID=534353
RUN mkdir -p /src/coordinator/internal/logic/verifier/lib
30 changes: 27 additions & 3 deletions common/types/batch_header.go
@@ -30,12 +30,15 @@ type BatchHeader struct {
dataHash common.Hash
parentBatchHash common.Hash
skippedL1MessageBitmap []byte
lastAppliedL1Block uint64
l1BlockRangeHash common.Hash
}

// NewBatchHeader creates a new BatchHeader
func NewBatchHeader(version uint8, batchIndex, totalL1MessagePoppedBefore uint64, parentBatchHash common.Hash, chunks []*Chunk) (*BatchHeader, error) {
// buffer for storing chunk hashes in order to compute the batch data hash
var dataBytes []byte
var l1BlockRangeHashBytes []byte

// skipped L1 message bitmap, an array of 256-bit bitmaps
var skippedBitmap []*big.Int
@@ -54,6 +57,7 @@ func NewBatchHeader(version uint8, batchIndex, totalL1MessagePoppedBefore uint64
return nil, err
}
dataBytes = append(dataBytes, chunkHash.Bytes()...)
l1BlockRangeHashBytes = append(l1BlockRangeHashBytes, chunk.L1BlockRangeHash.Bytes()...)

// build skip bitmap
for blockID, block := range chunk.Blocks {
@@ -93,6 +97,9 @@ func NewBatchHeader(version uint8, batchIndex, totalL1MessagePoppedBefore uint64
// compute data hash
dataHash := crypto.Keccak256Hash(dataBytes)

// compute l1 block range hash
l1BlockRangeHash := crypto.Keccak256Hash(l1BlockRangeHashBytes)

// compute skipped bitmap
bitmapBytes := make([]byte, len(skippedBitmap)*32)
for ii, num := range skippedBitmap {
@@ -109,6 +116,8 @@ func NewBatchHeader(version uint8, batchIndex, totalL1MessagePoppedBefore uint64
dataHash: dataHash,
parentBatchHash: parentBatchHash,
skippedL1MessageBitmap: bitmapBytes,
lastAppliedL1Block: chunks[len(chunks)-1].LastAppliedL1Block,
l1BlockRangeHash: l1BlockRangeHash,
}, nil
}

@@ -132,16 +141,29 @@ func (b *BatchHeader) SkippedL1MessageBitmap() []byte {
return b.skippedL1MessageBitmap
}

// LastAppliedL1Block returns the last applied L1 block in the BatchHeader.
func (b *BatchHeader) LastAppliedL1Block() uint64 {
return b.lastAppliedL1Block
}

// L1BlockRangeHash returns the batch L1 block range hash in the BatchHeader.
func (b *BatchHeader) L1BlockRangeHash() common.Hash {
return b.l1BlockRangeHash
}

// Encode encodes the BatchHeader into RollupV2 BatchHeaderV0Codec Encoding.
func (b *BatchHeader) Encode() []byte {
-batchBytes := make([]byte, 89+len(b.skippedL1MessageBitmap))
+batchBytes := make([]byte, 129+len(b.skippedL1MessageBitmap))
batchBytes[0] = b.version
binary.BigEndian.PutUint64(batchBytes[1:], b.batchIndex)
binary.BigEndian.PutUint64(batchBytes[9:], b.l1MessagePopped)
binary.BigEndian.PutUint64(batchBytes[17:], b.totalL1MessagePopped)
copy(batchBytes[25:], b.dataHash[:])
copy(batchBytes[57:], b.parentBatchHash[:])
copy(batchBytes[89:], b.skippedL1MessageBitmap[:])
binary.BigEndian.PutUint64(batchBytes[89+len(b.skippedL1MessageBitmap):], b.lastAppliedL1Block)
copy(batchBytes[97+len(b.skippedL1MessageBitmap):], b.l1BlockRangeHash[:])
return batchBytes
}

@@ -152,7 +174,7 @@ func (b *BatchHeader) Hash() common.Hash {

// DecodeBatchHeader attempts to decode the given byte slice into a BatchHeader.
func DecodeBatchHeader(data []byte) (*BatchHeader, error) {
-if len(data) < 89 {
+if len(data) < 97 {
return nil, fmt.Errorf("insufficient data for BatchHeader")
}
b := &BatchHeader{
@@ -162,7 +184,9 @@ func DecodeBatchHeader(data []byte) (*BatchHeader, error) {
totalL1MessagePopped: binary.BigEndian.Uint64(data[17:25]),
dataHash: common.BytesToHash(data[25:57]),
parentBatchHash: common.BytesToHash(data[57:89]),
-skippedL1MessageBitmap: data[89:],
+skippedL1MessageBitmap: data[89 : len(data)-40],
lastAppliedL1Block: binary.BigEndian.Uint64(data[len(data)-40 : len(data)-32]),
l1BlockRangeHash: common.BytesToHash(data[len(data)-32:]),
}
return b, nil
}
12 changes: 7 additions & 5 deletions common/types/block.go
@@ -27,13 +27,14 @@ func GetMemoryExpansionCost(memoryByteSize uint64) uint64 {
return memoryCost
}

-// WrappedBlock contains the block's Header, Transactions and WithdrawTrieRoot hash.
+// WrappedBlock contains the block's Header, Transactions, WithdrawTrieRoot hash and LastAppliedL1Block.
type WrappedBlock struct {
Header *types.Header `json:"header"`
// Transactions is only used for recover types.Transactions, the from of types.TransactionData field is missing.
Transactions []*types.TransactionData `json:"transactions"`
WithdrawRoot common.Hash `json:"withdraw_trie_root,omitempty"`
RowConsumption *types.RowConsumption `json:"row_consumption"`
LastAppliedL1Block uint64 `json:"last_applied_l1_block"`
txPayloadLengthCache map[string]uint64
}

@@ -67,7 +68,7 @@ func (w *WrappedBlock) NumL2Transactions() uint64 {

// Encode encodes the WrappedBlock into RollupV2 BlockContext Encoding.
func (w *WrappedBlock) Encode(totalL1MessagePoppedBefore uint64) ([]byte, error) {
-bytes := make([]byte, 60)
+bytes := make([]byte, 68)

if !w.Header.Number.IsUint64() {
return nil, errors.New("block number is not uint64")
@@ -92,6 +93,7 @@ func (w *WrappedBlock) Encode(totalL1MessagePoppedBefore uint64) ([]byte, error)
binary.BigEndian.PutUint64(bytes[48:], w.Header.GasLimit)
binary.BigEndian.PutUint16(bytes[56:], uint16(numTransactions))
binary.BigEndian.PutUint16(bytes[58:], uint16(numL1Messages))
binary.BigEndian.PutUint64(bytes[60:], w.LastAppliedL1Block)

return bytes, nil
}
@@ -108,7 +110,7 @@ func (w *WrappedBlock) EstimateL1CommitCalldataSize() uint64 {
size += 4 // 4 bytes payload length
size += w.getTxPayloadLength(txData)
}
-size += 60 // 60 bytes BlockContext
+size += 68 // 68 bytes BlockContext
return size
}

@@ -128,8 +130,8 @@ func (w *WrappedBlock) EstimateL1CommitGas() uint64 {
total += GetKeccak256Gas(txPayloadLength) // l2 tx hash
}

-// 60 bytes BlockContext calldata
-total += CalldataNonZeroByteGas * 60
+// 68 bytes BlockContext calldata
+total += CalldataNonZeroByteGas * 68

// sload
total += 2100 * numL1Messages // numL1Messages times cold sload in L1MessageQueue
13 changes: 9 additions & 4 deletions common/types/chunk.go
@@ -14,7 +14,9 @@ import (

// Chunk contains blocks to be encoded
type Chunk struct {
-Blocks []*WrappedBlock `json:"blocks"`
+Blocks             []*WrappedBlock `json:"blocks"`
+LastAppliedL1Block uint64          `json:"last_applied_l1_block"`
+L1BlockRangeHash   common.Hash     `json:"l1_block_range_hash"`
}

// NumL1Messages returns the number of L1 messages in this chunk.
@@ -53,8 +55,8 @@ func (c *Chunk) Encode(totalL1MessagePoppedBefore uint64) ([]byte, error) {
}
totalL1MessagePoppedBefore += block.NumL1Messages(totalL1MessagePoppedBefore)

-if len(blockBytes) != 60 {
-return nil, fmt.Errorf("block encoding is not 60 bytes long %x", len(blockBytes))
+if len(blockBytes) != 68 {
+return nil, fmt.Errorf("block encoding is not 68 bytes long %x", len(blockBytes))
}

chunkBytes = append(chunkBytes, blockBytes...)
@@ -77,6 +79,9 @@

chunkBytes = append(chunkBytes, l2TxDataBytes...)

binary.BigEndian.PutUint64(chunkBytes, c.LastAppliedL1Block)
chunkBytes = append(chunkBytes, c.L1BlockRangeHash.Bytes()...)

return chunkBytes, nil
}

@@ -131,7 +136,7 @@ func (c *Chunk) EstimateL1CommitGas() uint64 {
numBlocks := uint64(len(c.Blocks))
totalL1CommitGas += 100 * numBlocks // numBlocks times warm sload
totalL1CommitGas += CalldataNonZeroByteGas // numBlocks field of chunk encoding in calldata
-totalL1CommitGas += CalldataNonZeroByteGas * numBlocks * 60 // numBlocks of BlockContext in chunk
+totalL1CommitGas += CalldataNonZeroByteGas * numBlocks * 68 // numBlocks of BlockContext in chunk

totalL1CommitGas += GetKeccak256Gas(58*numBlocks + 32*totalTxNum) // chunk hash
return totalL1CommitGas
26 changes: 13 additions & 13 deletions common/types/chunk_test.go
@@ -38,20 +38,20 @@ func TestChunkEncode(t *testing.T) {
wrappedBlock := &WrappedBlock{}
assert.NoError(t, json.Unmarshal(templateBlockTrace, wrappedBlock))
assert.Equal(t, uint64(0), wrappedBlock.NumL1Messages(0))
-assert.Equal(t, uint64(298), wrappedBlock.EstimateL1CommitCalldataSize())
+assert.Equal(t, uint64(306), wrappedBlock.EstimateL1CommitCalldataSize())
assert.Equal(t, uint64(2), wrappedBlock.NumL2Transactions())
chunk = &Chunk{
Blocks: []*WrappedBlock{
wrappedBlock,
},
}
assert.Equal(t, uint64(0), chunk.NumL1Messages(0))
-assert.Equal(t, uint64(6042), chunk.EstimateL1CommitGas())
+assert.Equal(t, uint64(6298), chunk.EstimateL1CommitGas())
bytes, err = chunk.Encode(0)
hexString := hex.EncodeToString(bytes)
assert.NoError(t, err)
-assert.Equal(t, 299, len(bytes))
-assert.Equal(t, "0100000000000000020000000063807b2a0000000000000000000000000000000000000000000000000000000000000000000355418d1e81840002000000000073f87180843b9aec2e8307a12094c0c4c8baea3f6acb49b6e1fb9e2adeceeacb0ca28a152d02c7e14af60000008083019ecea0ab07ae99c67aa78e7ba5cf6781e90cc32b219b1de102513d56548a41e86df514a034cbd19feacd73e8ce64d00c4d1996b9b5243c578fd7f51bfaec288bbaf42a8b00000073f87101843b9aec2e8307a1209401bae6bf68e9a03fb2bc0615b1bf0d69ce9411ed8a152d02c7e14af60000008083019ecea0f039985866d8256f10c1be4f7b2cace28d8f20bde27e2604393eb095b7f77316a05a3e6e81065f2b4604bcec5bd4aba684835996fc3f879380aac1c09c6eed32f1", hexString)
+assert.Equal(t, 339, len(bytes))
+assert.Equal(t, "0100000000000000020000000063807b2a0000000000000000000000000000000000000000000000000000000000000000000355418d1e818400020000000000000000000000000073f87180843b9aec2e8307a12094c0c4c8baea3f6acb49b6e1fb9e2adeceeacb0ca28a152d02c7e14af60000008083019ecea0ab07ae99c67aa78e7ba5cf6781e90cc32b219b1de102513d56548a41e86df514a034cbd19feacd73e8ce64d00c4d1996b9b5243c578fd7f51bfaec288bbaf42a8b00000073f87101843b9aec2e8307a1209401bae6bf68e9a03fb2bc0615b1bf0d69ce9411ed8a152d02c7e14af60000008083019ecea0f039985866d8256f10c1be4f7b2cace28d8f20bde27e2604393eb095b7f77316a05a3e6e81065f2b4604bcec5bd4aba684835996fc3f879380aac1c09c6eed32f10000000000000000000000000000000000000000000000000000000000000000", hexString)

// Test case 4: when the chunk contains one block with 1 L1MsgTx
templateBlockTrace2, err := os.ReadFile("../testdata/blockTrace_04.json")
@@ -60,20 +60,20 @@
wrappedBlock2 := &WrappedBlock{}
assert.NoError(t, json.Unmarshal(templateBlockTrace2, wrappedBlock2))
assert.Equal(t, uint64(11), wrappedBlock2.NumL1Messages(0)) // 0..=9 skipped, 10 included
-assert.Equal(t, uint64(96), wrappedBlock2.EstimateL1CommitCalldataSize())
+assert.Equal(t, uint64(104), wrappedBlock2.EstimateL1CommitCalldataSize())
assert.Equal(t, uint64(1), wrappedBlock2.NumL2Transactions())
chunk = &Chunk{
Blocks: []*WrappedBlock{
wrappedBlock2,
},
}
assert.Equal(t, uint64(11), chunk.NumL1Messages(0))
-assert.Equal(t, uint64(5329), chunk.EstimateL1CommitGas())
+assert.Equal(t, uint64(5585), chunk.EstimateL1CommitGas())
bytes, err = chunk.Encode(0)
hexString = hex.EncodeToString(bytes)
assert.NoError(t, err)
-assert.Equal(t, 97, len(bytes))
-assert.Equal(t, "01000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a1200000c000b00000020df0b80825dc0941a258d17bf244c4df02d40343a7626a9d321e1058080808080", hexString)
+assert.Equal(t, 137, len(bytes))
+assert.Equal(t, "01000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a1200000c000b000000000000000000000020df0b80825dc0941a258d17bf244c4df02d40343a7626a9d321e10580808080800000000000000000000000000000000000000000000000000000000000000000", hexString)

// Test case 5: when the chunk contains two blocks each with 1 L1MsgTx
// TODO: revise this test, we cannot reuse the same L1MsgTx twice
@@ -84,12 +84,12 @@
},
}
assert.Equal(t, uint64(11), chunk.NumL1Messages(0))
-assert.Equal(t, uint64(10612), chunk.EstimateL1CommitGas())
+assert.Equal(t, uint64(11124), chunk.EstimateL1CommitGas())
bytes, err = chunk.Encode(0)
hexString = hex.EncodeToString(bytes)
assert.NoError(t, err)
-assert.Equal(t, 193, len(bytes))
-assert.Equal(t, "02000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a1200000c000b000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a12000001000000000020df0b80825dc0941a258d17bf244c4df02d40343a7626a9d321e105808080808000000020df0b80825dc0941a258d17bf244c4df02d40343a7626a9d321e1058080808080", hexString)
+assert.Equal(t, 241, len(bytes))
+assert.Equal(t, "02000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a1200000c000b0000000000000000000000000000000d00000000646b6e13000000000000000000000000000000000000000000000000000000000000000000000000007a120000010000000000000000000000000020df0b80825dc0941a258d17bf244c4df02d40343a7626a9d321e105808080808000000020df0b80825dc0941a258d17bf244c4df02d40343a7626a9d321e10580808080800000000000000000000000000000000000000000000000000000000000000000", hexString)
}

func TestChunkHash(t *testing.T) {
@@ -129,7 +129,7 @@ func TestChunkHash(t *testing.T) {
}
hash, err = chunk.Hash(0)
assert.NoError(t, err)
-assert.Equal(t, "0xaa9e494f72bc6965857856f0fae6916f27b2a6591c714a573b2fab46df03b8ae", hash.Hex())
+assert.Equal(t, "0x8d71fbbc486f745ff46ca5d1c0f18ab1f1a1b488e88708034b57d6a1d7fb04ed", hash.Hex())

// Test case 4: successfully hashing a chunk on two blocks each with L1 and L2 txs
templateBlockTrace2, err := os.ReadFile("../testdata/blockTrace_04.json")
@@ -144,7 +144,7 @@
}
hash, err = chunk.Hash(0)
assert.NoError(t, err)
-assert.Equal(t, "0x2eb7dd63bf8fc29a0f8c10d16c2ae6f9da446907c79d50f5c164d30dc8526b60", hash.Hex())
+assert.Equal(t, "0x6a47de75ba15fdefa5c8f63a43715f633a0f9559cf07e8bd164ac0cae80300cb", hash.Hex())
}

func TestErrorPaths(t *testing.T) {
27 changes: 20 additions & 7 deletions common/types/message/message.go
@@ -9,6 +9,7 @@ import (

"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/common/hexutil"
"github.com/scroll-tech/go-ethereum/core/types"
"github.com/scroll-tech/go-ethereum/crypto"
"github.com/scroll-tech/go-ethereum/rlp"
)
@@ -220,7 +221,10 @@ type TaskMsg struct {

// ChunkTaskDetail is a type containing ChunkTask detail.
type ChunkTaskDetail struct {
-BlockHashes []common.Hash `json:"block_hashes"`
+BlockHashes            []common.Hash `json:"block_hashes"`
+PrevLastAppliedL1Block uint64        `json:"prev_last_applied_l1_block"`
+LastAppliedL1Block     uint64        `json:"last_applied_l1_block"`
+L1BlockRangeHash       common.Hash   `json:"l1_block_range_hash"`
}

// BatchTaskDetail is a type containing BatchTask detail.
Expand Down Expand Up @@ -253,12 +257,14 @@ func (z *ProofDetail) Hash() ([]byte, error) {

// ChunkInfo is for calculating pi_hash for chunk
type ChunkInfo struct {
-ChainID uint64 `json:"chain_id"`
-PrevStateRoot common.Hash `json:"prev_state_root"`
-PostStateRoot common.Hash `json:"post_state_root"`
-WithdrawRoot common.Hash `json:"withdraw_root"`
-DataHash common.Hash `json:"data_hash"`
-IsPadding bool `json:"is_padding"`
+ChainID            uint64      `json:"chain_id"`
+PrevStateRoot      common.Hash `json:"prev_state_root"`
+PostStateRoot      common.Hash `json:"post_state_root"`
+WithdrawRoot       common.Hash `json:"withdraw_root"`
+DataHash           common.Hash `json:"data_hash"`
+L1BlockRangeHash   common.Hash `json:"l1_block_range_hash"`
+LastAppliedL1Block uint64      `json:"last_applied_l1_block"`
+IsPadding          bool        `json:"is_padding"`
}

// ChunkProof includes the proof info that are required for chunk verification and rollup.
@@ -273,6 +279,13 @@ type ChunkProof struct {
GitVersion string `json:"git_version,omitempty"`
}

// ChunkTrace groups the block traces of a chunk with its L1 block range info.
type ChunkTrace struct {
BlockTraces []*types.BlockTrace `json:"block_traces"`
PrevLastAppliedL1Block uint64 `json:"prev_last_applied_l1_block"`
LastAppliedL1Block uint64 `json:"last_applied_l1_block"`
L1BlockRangeHash common.Hash `json:"l1_block_range_hash"`
}

// BatchProof includes the proof info that are required for batch verification and rollup.
type BatchProof struct {
Proof []byte `json:"proof"`
2 changes: 2 additions & 0 deletions contracts/.env.example
@@ -10,3 +10,5 @@ L1_DEPLOYER_PRIVATE_KEY=0xabc123abc123abc123abc123abc123abc123abc123abc123abc123
L2_DEPLOYER_PRIVATE_KEY=0xabc123abc123abc123abc123abc123abc123abc123abc123abc123abc123abc1

CHAIN_ID_L2="5343541"

L1_BLOCKS_FIRST_APPLIED="1"
23 changes: 21 additions & 2 deletions contracts/docs/apis/ScrollChain.md
@@ -45,7 +45,7 @@ Add an account to the sequencer list.
### commitBatch

```solidity
-function commitBatch(uint8 _version, bytes _parentBatchHeader, bytes[] _chunks, bytes _skippedL1MessageBitmap) external nonpayable
+function commitBatch(uint8 _version, bytes _parentBatchHeader, bytes[] _chunks, bytes _skippedL1MessageBitmap, uint64 _prevLastAppliedL1Block) external nonpayable
```

Commit a batch of transactions on layer 1.
@@ -60,6 +60,7 @@ Commit a batch of transactions on layer 1.
| _parentBatchHeader | bytes | undefined |
| _chunks | bytes[] | undefined |
| _skippedL1MessageBitmap | bytes | undefined |
| _prevLastAppliedL1Block | uint64 | undefined |

### committedBatches

@@ -145,7 +146,7 @@ Import layer 2 genesis block
### initialize

```solidity
-function initialize(address _messageQueue, address _verifier, uint256 _maxNumTxInChunk) external nonpayable
+function initialize(address _messageQueue, address _verifier, uint256 _maxNumTxInChunk, address _l1ViewOracle) external nonpayable
```


@@ -159,6 +160,7 @@ function initialize(address _messageQueue, address _verifier, uint256 _maxNumTxI
| _messageQueue | address | undefined |
| _verifier | address | undefined |
| _maxNumTxInChunk | uint256 | undefined |
| _l1ViewOracle | address | undefined |

### isBatchFinalized

@@ -226,6 +228,23 @@ Whether an account is a sequencer.
|---|---|---|
| _0 | bool | undefined |

### l1ViewOracle

```solidity
function l1ViewOracle() external view returns (address)
```

The address of L1ViewOracle.




#### Returns

| Name | Type | Description |
|---|---|---|
| _0 | address | undefined |

### lastFinalizedBatchIndex
