
How can I run the model on my own dataset? #33

Open
kurdt23 opened this issue Sep 22, 2024 · 7 comments

Comments

@kurdt23

kurdt23 commented Sep 22, 2024

I have my own image dataset and I don't understand where/how I can create keypoint masks for the images. Can you explain the process of preparing my own dataset and then running the model, please? I'll be waiting for your answer.

@htcr
Owner

htcr commented Sep 24, 2024

Hi, does your dataset contain ground-truth graphs? You can process that graph into the same format used in this codebase, and it will then generate keypoint masks for you.

@htcr
Owner

htcr commented Sep 24, 2024

For example, you can refer to generate_labels.py under the cityscale dir. From line 82, it reads the GT graph, which reveals the GT graph format: basically an adjacency list of vertices.
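For readers preparing their own data, the adjacency-list format described above amounts to a pickled dict mapping each vertex coordinate to the coordinates of its neighbors. The sketch below shows what writing and reading such a file might look like; the exact coordinate convention (row/col vs. x/y) is an assumption and should be verified against generate_labels.py:

```python
import os
import pickle
import tempfile

# Hypothetical tiny road graph with three vertices. Keys are vertex
# coordinates in image pixels; values list the adjacent vertices.
# (Coordinate order is an assumption; check generate_labels.py.)
gt_graph = {
    (100, 150): [(120, 180), (80, 140)],
    (120, 180): [(100, 150)],
    (80, 140): [(100, 150)],
}

# Save it with the naming scheme the codebase uses (region_{}_graph_gt.pickle).
path = os.path.join(tempfile.mkdtemp(), "region_0_graph_gt.pickle")
with open(path, "wb") as f:
    pickle.dump(gt_graph, f)

# The label generator reads it back the same way:
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(len(loaded))  # 3 vertices
```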

@kurdt23
Author

kurdt23 commented Sep 24, 2024

Unfortunately, my dataset does not contain ground-truth graphs. I only have 512x512 satellite images of roads at different zoom levels in TIFF format. I attached some examples of these images. Can you please help me: how can I get ground-truth graphs from my images?

@kurdt23 kurdt23 closed this as completed Sep 24, 2024
@kurdt23 kurdt23 reopened this Sep 24, 2024
@htcr
Owner

htcr commented Sep 25, 2024

OK, if you are trying to re-train/fine-tune on your dataset, you do need a ground-truth graph. Otherwise, if you just want to run our checkpoint on some images, you can simply follow the inference instructions in the README.

@kurdt23
Author

kurdt23 commented Sep 25, 2024

Thank you! I have checkpoints for the CityScale and SpaceNet datasets; I'll try some of them on my images.

@kurdt23
Author

kurdt23 commented Oct 9, 2024

Hello, I'm trying to run inference on my own images via the command:
python3 inferencer.py --config=config/toponet_vitb_512_ekb.yaml --checkpoint=lightning_logs/vhfsw197/checkpoints/epoch=9-step=25000.ckpt

But I get this error:

##### Loading Trained CKPT lightning_logs/vhfsw197/checkpoints/epoch=9-step=25000.ckpt #####
Traceback (most recent call last):
  File "/misc/home6/s0181/sam_road/inferencer.py", line 281, in <module>
    for img_id in test_img_indices:
NameError: name 'test_img_indices' is not defined

line 281 and the lines that follow:

for img_id in test_img_indices:
    print(f'Processing {img_id}')
    # [H, W, C] RGB
    img = read_rgb_img(rgb_pattern.format(img_id))
    start_seconds = time.time()

I think the error comes from the dataset-name check. But the cityscale and spacenet branches need a GT graph, which I don't have for my images:

if config.DATASET == 'cityscale':
    _, _, test_img_indices = cityscale_data_partition()
    rgb_pattern = './cityscale/20cities/region_{}_sat.png'
    gt_graph_pattern = 'cityscale/20cities/region_{}_graph_gt.pickle'
elif config.DATASET == 'spacenet':
    _, _, test_img_indices = spacenet_data_partition()
    rgb_pattern = './spacenet/RGB_1.0_meter/{}__rgb.png'
    gt_graph_pattern = './spacenet/RGB_1.0_meter/{}__gt_graph.p'
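One way around the NameError is to give this dispatch an explicit branch for a custom dataset that enumerates test images from disk instead of calling a data-partition helper. The sketch below is hypothetical: the 'ekb' dataset name, the directory layout, and the resolve_dataset helper are all assumptions, not part of the repo.

```python
import glob
import os

def resolve_dataset(dataset, image_dir, ext="tif"):
    """Mirror inferencer.py's dispatch for a custom dataset that has
    images but no ground-truth graphs. Returns
    (test_img_indices, rgb_pattern, gt_graph_pattern)."""
    if dataset == "ekb":
        rgb_pattern = os.path.join(image_dir, "{}." + ext)
        # Derive image ids from the filenames on disk.
        test_img_indices = sorted(
            os.path.splitext(os.path.basename(p))[0]
            for p in glob.glob(os.path.join(image_dir, "*." + ext))
        )
        return test_img_indices, rgb_pattern, None  # None: no GT graphs
    raise ValueError(f"unknown dataset: {dataset}")
```

With a branch like this, the GT-dependent parts of inferencer.py could be skipped whenever gt_graph_pattern is None.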

I based the configuration on your toponet_vitb_1024.yaml. Here is toponet_vitb_512_ekb.yaml:

SAM_VERSION: 'vit_b'
SAM_CKPT_PATH: 'sam_ckpts/sam_vit_b_01ec64.pth'
PATCH_SIZE: 512
BATCH_SIZE: 16
DATA_WORKER_NUM: 1
TRAIN_EPOCHS: 10
BASE_LR: 0.001
FREEZE_ENCODER: False
ENCODER_LR_FACTOR: 0.1
ENCODER_LORA: False
FOCAL_LOSS: False
USE_SAM_DECODER: False

# TOPONET
# sample per patch
TOPO_SAMPLE_NUM: 256

# Inference
INFER_BATCH_SIZE: 64
SAMPLE_MARGIN: 64
INFER_PATCHES_PER_EDGE: 16

# [0, 255]
ITSC_THRESHOLD: 128
ROAD_THRESHOLD: 128
# pixels
ITSC_NMS_RADIUS: 8
ROAD_NMS_RADIUS: 16
NEIGHBOR_RADIUS: 64
MAX_NEIGHBOR_QUERIES: 16

Please, can you explain what I need to do to solve the problem?

@kurdt23
Author

kurdt23 commented Nov 6, 2024

The problem above has already been solved. Do I understand correctly that even to run inferencer.py you still need GTs? Could you please write some instructions for running inferencer.py on arbitrary images with a trained model? I only managed to run it with GT from the CityScale dataset (gt.p files): I just copied random ones and renamed them for my images. Without GT, inferencer.py doesn't run on arbitrary pictures.

inferencer.py and GT

gt_graph_path = gt_graph_pattern.format(img_id)
gt_graph = pickle.load(open(gt_graph_path, "rb"))
gt_nodes, gt_edges = graph_utils.convert_from_sat2graph_format(gt_graph)
if len(gt_nodes) == 0:
    gt_nodes = np.zeros([0, 2], dtype=np.float32)
...
large_map_sat2graph_format = graph_utils.convert_to_sat2graph_format(pred_nodes, pred_edges)
graph_save_dir = os.path.join(output_dir, 'graph')
if not os.path.exists(graph_save_dir):
    os.makedirs(graph_save_dir)
graph_save_path = os.path.join(graph_save_dir, f'{img_id}.p')
with open(graph_save_path, 'wb') as file:
    pickle.dump(large_map_sat2graph_format, file)

If I comment out the gt_graph/gt_nodes/gt_edges code it also runs, but how can I then compute metrics on my own images, which have no gt.p and gt.json files? Or on other popular datasets like DeepGlobe Roads or Massachusetts Roads, which also lack gt.p and gt.json files?
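Rather than copying random gt.p files, one option is to guard the GT-loading step so inference runs without ground truth and GT-dependent metrics are simply skipped. This is a hypothetical sketch: the helper name and the None-means-no-GT convention are assumptions, and in the repo the loaded graph would go through graph_utils.convert_from_sat2graph_format instead of the inline conversion shown here.

```python
import pickle
import numpy as np

def load_gt_nodes_edges(gt_graph_pattern, img_id):
    """Return (gt_nodes, gt_edges). When no GT pattern is configured,
    return empty structures so prediction and graph saving still run;
    metrics that compare against GT are only possible with real GT."""
    if gt_graph_pattern is None:
        return np.zeros([0, 2], dtype=np.float32), []
    with open(gt_graph_pattern.format(img_id), "rb") as f:
        gt_graph = pickle.load(f)
    # Simplified stand-in for graph_utils.convert_from_sat2graph_format:
    # vertices are the dict keys, edges come from the adjacency lists.
    nodes = np.array(list(gt_graph.keys()), dtype=np.float32)
    edges = [(k, n) for k, ns in gt_graph.items() for n in ns]
    return nodes, edges
```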

I also have trouble when continuing to train the model on CityScale: --resume just doesn't work at all; training starts from the beginning.
