refactor: remove errors directive comparison #10502

Closed
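
This refactor replaces direct comparisons against sentinel errors, such as err == io.EOF, with errors.Is(err, io.EOF), which also matches a sentinel that has been wrapped (for example via fmt.Errorf("...: %w", err)). A minimal sketch of the pattern being applied — a hypothetical reader loop, not code from this diff:

package main

import (
	"errors"
	"io"
	"strings"
)

// drain reads until the source is exhausted, treating io.EOF as a clean stop.
func drain(r io.Reader) error {
	buf := make([]byte, 4)
	for {
		if _, err := r.Read(buf); errors.Is(err, io.EOF) {
			// err == io.EOF matches only the bare sentinel;
			// errors.Is also matches io.EOF wrapped with %w.
			return nil
		} else if err != nil {
			return err
		}
	}
}

func main() {
	_ = drain(strings.NewReader("some data"))
}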
3 changes: 2 additions & 1 deletion cannon/mipsevm/memory.go
@@ -3,6 +3,7 @@ package mipsevm
import (
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"io"
"math/bits"
@@ -278,7 +279,7 @@ func (m *Memory) SetMemoryRange(addr uint32, r io.Reader) error {
p.InvalidateFull()
n, err := r.Read(p.Data[pageAddr:])
if err != nil {
if err == io.EOF {
if errors.Is(err, io.EOF) {
return nil
}
return err
3 changes: 2 additions & 1 deletion cannon/mipsevm/page.go
@@ -6,6 +6,7 @@ import (
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"sync"
@@ -47,7 +48,7 @@ func (p *Page) UnmarshalJSON(dat []byte) error {
defer r.Close()
if n, err := r.Read(p[:]); n != PageSize {
return fmt.Errorf("epxeted %d bytes, but got %d", PageSize, n)
} else if err == io.EOF {
} else if errors.Is(err, io.EOF) {
return nil
} else {
return err
6 changes: 3 additions & 3 deletions op-batcher/batcher/channel_builder.go
@@ -300,7 +300,7 @@ func (c *ChannelBuilder) outputReadyFrames() error {
// When creating a frame from the ready compression data, the frame overhead
// will be added to the total output size, so we can add it in the condition.
for c.co.ReadyBytes()+derive.FrameV0OverHeadSize >= int(c.cfg.MaxFrameSize) {
if err := c.outputFrame(); err == io.EOF {
if err := c.outputFrame(); errors.Is(err, io.EOF) {
Review comment (Contributor):

Ensure proper error handling in outputReadyFrames.

- if err := c.outputFrame(); errors.Is(err, io.EOF) {
+ if err := c.outputFrame(); err != nil && !errors.Is(err, io.EOF) {

The original code does not handle errors that are not io.EOF. This change ensures that all errors are properly handled.


return nil
} else if err != nil {
return err
@@ -315,7 +315,7 @@ func (c *ChannelBuilder) closeAndOutputAllFrames() error {
}

for {
if err := c.outputFrame(); err == io.EOF {
if err := c.outputFrame(); errors.Is(err, io.EOF) {
Review comment (Contributor):

Ensure proper error handling in closeAndOutputAllFrames.

- if err := c.outputFrame(); errors.Is(err, io.EOF) {
+ if err := c.outputFrame(); err != nil && !errors.Is(err, io.EOF) {

Similar to the previous comment, this ensures that all errors are properly handled, not just io.EOF.


return nil
} else if err != nil {
return err
@@ -329,7 +329,7 @@
func (c *ChannelBuilder) outputFrame() error {
var buf bytes.Buffer
fn, err := c.co.OutputFrame(&buf, c.cfg.MaxFrameSize)
if err != io.EOF && err != nil {
if !errors.Is(err, io.EOF) && err != nil {
return fmt.Errorf("writing frame[%d]: %w", fn, err)
}

3 changes: 2 additions & 1 deletion op-batcher/batcher/channel_manager_test.go
@@ -1,6 +1,7 @@
package batcher

import (
"errors"
"io"
"math/big"
"math/rand"
@@ -409,7 +410,7 @@ func ChannelManagerCloseAllTxsFailed(t *testing.T, batchType uint) {
drainTxData := func() (txdatas []txData) {
for {
txdata, err := m.TxData(eth.BlockID{})
if err == io.EOF {
if errors.Is(err, io.EOF) {
return
}
require.NoError(err, "Expected channel manager to produce valid tx data")
4 changes: 2 additions & 2 deletions op-batcher/batcher/driver.go
@@ -376,7 +376,7 @@ func (l *BatchSubmitter) publishStateToL1(queue *txmgr.Queue[txID], receiptsCh c
}
err := l.publishTxToL1(l.killCtx, queue, receiptsCh)
if err != nil {
if err != io.EOF {
if !errors.Is(err, io.EOF) {
l.Log.Error("error publishing tx to l1", "err", err)
}
return
@@ -436,7 +436,7 @@ func (l *BatchSubmitter) publishTxToL1(ctx context.Context, queue *txmgr.Queue[t
// Collect next transaction data
txdata, err := l.state.TxData(l1tip.ID())

if err == io.EOF {
if errors.Is(err, io.EOF) {
l.Log.Trace("no transaction data available")
return err
} else if err != nil {
2 changes: 1 addition & 1 deletion op-chain-ops/cmd/receipt-reference-builder/pull.go
@@ -335,7 +335,7 @@ func batchBlockByNumber(ctx context.Context, c *ethclient.Client, blockNumbers [
int(MaxBatchSize),
)
for {
if err := batchReq.Fetch(ctx); err == io.EOF {
if err := batchReq.Fetch(ctx); errors.Is(err, io.EOF) {
break
} else if err != nil {
log.Warn("Failed to Fetch Blocks", "Err", err, "Start", blockNumbers[0], "End", blockNumbers[len(blockNumbers)-1])
7 changes: 4 additions & 3 deletions op-e2e/actions/l2_batcher.go
@@ -5,6 +5,7 @@ import (
"context"
"crypto/ecdsa"
"crypto/rand"
"errors"
"io"
"math/big"

@@ -239,7 +240,7 @@ func (s *L2Batcher) ActL2BatchSubmit(t Testing, txOpts ...func(tx *types.Dynamic
data := new(bytes.Buffer)
data.WriteByte(derive.DerivationVersion0)
// subtract one, to account for the version byte
if _, err := s.l2ChannelOut.OutputFrame(data, s.l2BatcherCfg.MaxL1TxSize-1); err == io.EOF {
if _, err := s.l2ChannelOut.OutputFrame(data, s.l2BatcherCfg.MaxL1TxSize-1); errors.Is(err, io.EOF) {
s.l2ChannelOut = nil
s.l2Submitting = false
} else if err != nil {
@@ -342,7 +343,7 @@ func (s *L2Batcher) ActL2BatchSubmitMultiBlob(t Testing, numBlobs int) {
// subtract one, to account for the version byte
l = s.l2BatcherCfg.MaxL1TxSize - 1
}
if _, err := s.l2ChannelOut.OutputFrame(data, l); err == io.EOF {
if _, err := s.l2ChannelOut.OutputFrame(data, l); errors.Is(err, io.EOF) {
s.l2Submitting = false
if i < numBlobs-1 {
t.Fatalf("failed to fill up %d blobs, only filled %d", numBlobs, i+1)
@@ -410,7 +411,7 @@ func (s *L2Batcher) ActL2BatchSubmitGarbage(t Testing, kind GarbageKind) {
data.WriteByte(derive.DerivationVersion0)

// subtract one, to account for the version byte
if _, err := s.l2ChannelOut.OutputFrame(data, s.l2BatcherCfg.MaxL1TxSize-1); err == io.EOF {
if _, err := s.l2ChannelOut.OutputFrame(data, s.l2BatcherCfg.MaxL1TxSize-1); errors.Is(err, io.EOF) {
s.l2ChannelOut = nil
s.l2Submitting = false
} else if err != nil {
2 changes: 1 addition & 1 deletion op-e2e/actions/l2_verifier.go
@@ -226,7 +226,7 @@ func (s *L2Verifier) ActL2PipelineStep(t Testing) {

s.l2PipelineIdle = false
err := s.derivation.Step(t.Ctx())
if err == io.EOF || (err != nil && errors.Is(err, derive.EngineELSyncing)) {
if errors.Is(err, io.EOF) || (err != nil && errors.Is(err, derive.EngineELSyncing)) {
s.l2PipelineIdle = true
return
} else if err != nil && errors.Is(err, derive.NotEnoughData) {
3 changes: 2 additions & 1 deletion op-node/cmd/batch_decoder/reassemble/reassemble.go
@@ -2,6 +2,7 @@ package reassemble

import (
"encoding/json"
"errors"
"fmt"
"io"
"log"
@@ -113,7 +114,7 @@ func processFrames(cfg Config, rollupCfg *rollup.Config, id derive.ChannelID, fr
if ch.IsReady() {
br, err := derive.BatchReader(ch.Reader(), spec.MaxRLPBytesPerChannel(ch.HighestBlock().Time))
if err == nil {
for batchData, err := br(); err != io.EOF; batchData, err = br() {
for batchData, err := br(); !errors.Is(err, io.EOF); batchData, err = br() {
if err != nil {
fmt.Printf("Error reading batchData for channel %v. Err: %v\n", id.String(), err)
invalidBatches = true
3 changes: 2 additions & 1 deletion op-node/p2p/store/ip_ban_book.go
@@ -3,6 +3,7 @@ package store
import (
"context"
"encoding/json"
"errors"
"net"
"time"

@@ -71,7 +72,7 @@ func (d *ipBanBook) startGC() {

func (d *ipBanBook) GetIPBanExpiration(ip net.IP) (time.Time, error) {
rec, err := d.book.getRecord(ip.To16().String())
if err == UnknownRecordErr {
if errors.Is(err, UnknownRecordErr) {
return time.Time{}, UnknownBanErr
}
if err != nil {
3 changes: 2 additions & 1 deletion op-node/p2p/store/mdbook.go
@@ -3,6 +3,7 @@ package store
import (
"context"
"encoding/json"
"errors"
"sync/atomic"
"time"

@@ -69,7 +70,7 @@ func (m *metadataBook) startGC() {
func (m *metadataBook) GetPeerMetadata(id peer.ID) (PeerMetadata, error) {
record, err := m.book.getRecord(id)
// If the record is not found, return an empty PeerMetadata
if err == UnknownRecordErr {
if errors.Is(err, UnknownRecordErr) {
return PeerMetadata{}, nil
}
if err != nil {
3 changes: 2 additions & 1 deletion op-node/p2p/store/peer_ban_book.go
@@ -3,6 +3,7 @@ package store
import (
"context"
"encoding/json"
"errors"
"time"

"github.com/ethereum-optimism/optimism/op-service/clock"
@@ -67,7 +68,7 @@ func (d *peerBanBook) startGC() {

func (d *peerBanBook) GetPeerBanExpiration(id peer.ID) (time.Time, error) {
rec, err := d.book.getRecord(id)
if err == UnknownRecordErr {
if errors.Is(err, UnknownRecordErr) {
return time.Time{}, UnknownBanErr
}
if err != nil {
2 changes: 1 addition & 1 deletion op-node/p2p/store/records_book.go
@@ -128,7 +128,7 @@ func (d *recordsBook[K, V]) SetRecord(key K, diff recordDiff[V]) (V, error) {
d.Lock()
defer d.Unlock()
rec, err := d.getRecord(key)
if err == UnknownRecordErr { // instantiate new record if it does not exist yet
if errors.Is(err, UnknownRecordErr) { // instantiate new record if it does not exist yet
rec = d.newRecord()
} else if err != nil {
return d.newRecord(), err
3 changes: 2 additions & 1 deletion op-node/p2p/store/scorebook.go
@@ -2,6 +2,7 @@ package store

import (
"context"
"errors"
"sync/atomic"
"time"

@@ -71,7 +72,7 @@ func (d *scoreBook) startGC() {

func (d *scoreBook) GetPeerScores(id peer.ID) (PeerScores, error) {
record, err := d.book.getRecord(id)
if err == UnknownRecordErr {
if errors.Is(err, UnknownRecordErr) {
return PeerScores{}, nil // return zeroed scores by default
}
if err != nil {
6 changes: 3 additions & 3 deletions op-node/rollup/derive/batch_queue.go
@@ -148,7 +148,7 @@ func (bq *BatchQueue) NextBatch(ctx context.Context, parent eth.L2BlockRef) (*Si

// Load more data into the batch queue
outOfData := false
if batch, err := bq.prev.NextBatch(ctx); err == io.EOF {
if batch, err := bq.prev.NextBatch(ctx); errors.Is(err, io.EOF) {
Review comment (Contributor):

Ensure proper error handling in NextBatch.

- if batch, err := bq.prev.NextBatch(ctx); errors.Is(err, io.EOF) {
+ if batch, err := bq.prev.NextBatch(ctx); err != nil && !errors.Is(err, io.EOF) {

This change ensures that all errors are properly handled, not just io.EOF.


outOfData = true
} else if err != nil {
return nil, false, err
@@ -168,9 +168,9 @@ func (bq *BatchQueue) NextBatch(ctx context.Context, parent eth.L2BlockRef) (*Si

// Finally attempt to derive more batches
batch, err := bq.deriveNextBatch(ctx, outOfData, parent)
if err == io.EOF && outOfData {
if errors.Is(err, io.EOF) && outOfData {
return nil, false, io.EOF
} else if err == io.EOF {
} else if errors.Is(err, io.EOF) {
Review comment (Contributor):

Ensure proper error handling in deriveNextBatch.

- } else if errors.Is(err, io.EOF) {
+ } else if err != nil && !errors.Is(err, io.EOF) {

This ensures that all errors are properly handled, not just io.EOF.


return nil, false, NotEnoughData
} else if err != nil {
return nil, false, err
5 changes: 3 additions & 2 deletions op-node/rollup/derive/channel_bank.go
@@ -2,6 +2,7 @@ package derive

import (
"context"
"errors"
"io"
"slices"

@@ -180,7 +181,7 @@ func (cb *ChannelBank) tryReadChannelAtIndex(i int) (data []byte, err error) {
func (cb *ChannelBank) NextData(ctx context.Context) ([]byte, error) {
// Do the read from the channel bank first
data, err := cb.Read()
if err == io.EOF {
if errors.Is(err, io.EOF) {
// continue - We will attempt to load data into the channel bank
} else if err != nil {
return nil, err
@@ -189,7 +190,7 @@
}

// Then load data into the channel bank
if frame, err := cb.prev.NextFrame(ctx); err == io.EOF {
if frame, err := cb.prev.NextFrame(ctx); errors.Is(err, io.EOF) {
return nil, io.EOF
} else if err != nil {
return nil, err
5 changes: 3 additions & 2 deletions op-node/rollup/derive/channel_in_reader.go
@@ -3,6 +3,7 @@ package derive
import (
"bytes"
"context"
"errors"
"fmt"
"io"

@@ -65,7 +66,7 @@ func (cr *ChannelInReader) NextChannel() {
// It will return a temporary error if it needs to be called again to advance some internal state.
func (cr *ChannelInReader) NextBatch(ctx context.Context) (Batch, error) {
if cr.nextBatchFn == nil {
if data, err := cr.prev.NextData(ctx); err == io.EOF {
if data, err := cr.prev.NextData(ctx); errors.Is(err, io.EOF) {
return nil, io.EOF
} else if err != nil {
return nil, err
@@ -79,7 +80,7 @@ func (cr *ChannelInReader) NextBatch(ctx context.Context) (Batch, error) {
// TODO: can batch be non nil while err == io.EOF
// This depends on the behavior of rlp.Stream
batchData, err := cr.nextBatchFn()
if err == io.EOF {
if errors.Is(err, io.EOF) {
cr.NextChannel()
return nil, NotEnoughData
} else if err != nil {
4 changes: 2 additions & 2 deletions op-node/rollup/derive/engine_queue.go
@@ -304,7 +304,7 @@ func (eq *EngineQueue) Step(ctx context.Context) error {
// Trying unsafe payload should be done before safe attributes
// It allows the unsafe head can move forward while the long-range consolidation is in progress.
if eq.unsafePayloads.Len() > 0 {
if err := eq.tryNextUnsafePayload(ctx); err != io.EOF {
if err := eq.tryNextUnsafePayload(ctx); !errors.Is(err, io.EOF) {
Review comment (Contributor):

Ensure proper error handling for non-EOF errors in tryNextUnsafePayload.

-		if err := eq.tryNextUnsafePayload(ctx); !errors.Is(err, io.EOF) {
+		if err := eq.tryNextUnsafePayload(ctx); err != nil && !errors.Is(err, io.EOF) {
			return err
		}

The current implementation only handles the case where the error is not io.EOF, potentially missing other critical errors that should also halt the execution. The suggested change ensures that any error, except io.EOF, is properly handled.


return err
}
// EOF error means we can't process the next unsafe payload. Then we should process next safe attributes.
@@ -331,7 +331,7 @@ func (eq *EngineQueue) Step(ctx context.Context) error {
if err := eq.tryFinalizePastL2Blocks(ctx); err != nil {
return err
}
if next, err := eq.prev.NextAttributes(ctx, eq.ec.PendingSafeL2Head()); err == io.EOF {
if next, err := eq.prev.NextAttributes(ctx, eq.ec.PendingSafeL2Head()); errors.Is(err, io.EOF) {
Review comment (Contributor):

Handle potential io.EOF error correctly in Step method.

-	if next, err := eq.prev.NextAttributes(ctx, eq.ec.PendingSafeL2Head()); errors.Is(err, io.EOF) {
+	if next, err := eq.prev.NextAttributes(ctx, eq.ec.PendingSafeL2Head()); err != nil {
+		if errors.Is(err, io.EOF) {
			return io.EOF
+		}
+		return err
	} else if err != nil {
		return err
	} else {

The original code does not handle other errors that might occur when fetching the next attributes. This change ensures that all errors are handled appropriately, not just io.EOF.


return io.EOF
} else if err != nil {
return err
5 changes: 3 additions & 2 deletions op-node/rollup/derive/l1_retrieval.go
@@ -2,6 +2,7 @@ package derive

import (
"context"
"errors"
"fmt"
"io"

@@ -49,7 +50,7 @@ func (l1r *L1Retrieval) Origin() eth.L1BlockRef {
func (l1r *L1Retrieval) NextData(ctx context.Context) ([]byte, error) {
if l1r.datas == nil {
next, err := l1r.prev.NextL1Block(ctx)
if err == io.EOF {
if errors.Is(err, io.EOF) {
return nil, io.EOF
} else if err != nil {
return nil, err
@@ -61,7 +62,7 @@ func (l1r *L1Retrieval) NextData(ctx context.Context) ([]byte, error) {

l1r.log.Debug("fetching next piece of data")
data, err := l1r.datas.Next(ctx)
if err == io.EOF {
if errors.Is(err, io.EOF) {
l1r.datas = nil
return nil, io.EOF
} else if err != nil {
4 changes: 2 additions & 2 deletions op-node/rollup/derive/pipeline.go
@@ -162,7 +162,7 @@ func (dp *DerivationPipeline) Step(ctx context.Context) error {

// if any stages need to be reset, do that first.
if dp.resetting < len(dp.stages) {
if err := dp.stages[dp.resetting].Reset(ctx, dp.eng.Origin(), dp.eng.SystemConfig()); err == io.EOF {
if err := dp.stages[dp.resetting].Reset(ctx, dp.eng.Origin(), dp.eng.SystemConfig()); errors.Is(err, io.EOF) {
dp.log.Debug("reset of stage completed", "stage", dp.resetting, "origin", dp.eng.Origin())
dp.resetting += 1
return nil
@@ -174,7 +174,7 @@ func (dp *DerivationPipeline) Step(ctx context.Context) error {
}

// Now step the engine queue. It will pull earlier data as needed.
if err := dp.eng.Step(ctx); err == io.EOF {
if err := dp.eng.Step(ctx); errors.Is(err, io.EOF) {
// If every stage has returned io.EOF, try to advance the L1 Origin
return dp.traversal.AdvanceL1Block(ctx)
} else if errors.Is(err, EngineELSyncing) {
3 changes: 2 additions & 1 deletion op-node/rollup/derive/plasma_data_source_test.go
@@ -2,6 +2,7 @@ package derive

import (
"context"
"errors"
"io"
"math/big"
"math/rand"
@@ -260,7 +261,7 @@ func TestPlasmaDataSource(t *testing.T) {
break
}

for data, err := src.Next(ctx); err != io.EOF; data, err = src.Next(ctx) {
for data, err := src.Next(ctx); !errors.Is(err, io.EOF); data, err = src.Next(ctx) {
Review comment (Contributor):

Codebase Verification

The verification process has revealed that there are multiple instances across the codebase where error handling does not use the errors.Is() function, contrary to the best practices mentioned in the review comment. These instances are spread across various files and modules, indicating that the update to use errors.Is() for error comparison is not consistently applied throughout the project.

Locations needing attention:

  • proxyd/consensus_tracker.go
  • op-ufm/pkg/provider/roundtrip.go
  • op-ufm/pkg/provider/heartbeat.go
  • op-node/rollup/derive/engine_queue.go
  • op-node/rollup/derive/batch_queue_test.go
  • op-node/rollup/driver/sequencer_test.go
  • op-node/cmd/batch_decoder/reassemble/reassemble.go
  • op-e2e/e2eutils/*
  • op-conductor/conductor/service.go
  • op-challenger/game/scheduler/coordinator.go
  • op-chain-ops/cmd/op-simulate/main.go
  • op-batcher/batcher/*

These files contain direct comparisons of errors without using errors.Is(), which should be refactored to align with the updated error handling strategy.

Analysis chain

Use of errors.Is() for error comparison is a best practice in Go, especially for handling wrapped errors. Good implementation.
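
For illustration, a short sketch of the wrapped-error case that a direct == comparison misses but errors.Is catches (stand-in sentinel and wrapper, not code from this repository):

package main

import (
	"errors"
	"fmt"
)

// errUnknownRecord stands in for a sentinel such as UnknownRecordErr.
var errUnknownRecord = errors.New("unknown record")

// getRecord wraps the sentinel with context, as lower layers often do.
func getRecord(key string) error {
	return fmt.Errorf("lookup %q: %w", key, errUnknownRecord)
}

func main() {
	err := getRecord("peer-1")
	fmt.Println(err == errUnknownRecord)          // false: the sentinel is wrapped
	fmt.Println(errors.Is(err, errUnknownRecord)) // true: errors.Is walks the wrap chain
}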

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify that all instances of error handling in the project have been updated to use `errors.Is()`.

# Test: Search for direct error comparisons. Expect: No occurrences of direct error comparisons.
rg --type go '!errors.Is' --glob '*.go'

Length of output: 2982

logger.Info("yielding data")
// check that each commitment is resolved
require.NoError(t, err)