fix autocast to support global tensor #10605

Open · wants to merge 3 commits into base: master
Conversation

fpzh2011
Contributor

oneflow autocast does not support global tensors. The following code raises an error:

import oneflow as flow

placement = flow.placement("cuda", ranks=[0])
sbp = flow.sbp.broadcast
a = flow.randn(2, 3).to_global(placement=placement, sbp=sbp)
b = flow.randn(3, 4).to_global(placement=placement, sbp=sbp)
with flow.autocast(device_type="cuda"):
    c = flow.matmul(a, b)  # raises an error: autocast does not handle global tensors

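For reference, the expected behavior once autocast handles global tensors: the matmul inside the autocast region should run in reduced precision, just as it does for local tensors. A minimal sketch of that expectation (the float16 result and the is_global check are illustrative assumptions, not taken from this PR):

import oneflow as flow

placement = flow.placement("cuda", ranks=[0])
sbp = flow.sbp.broadcast
a = flow.randn(2, 3).to_global(placement=placement, sbp=sbp)
b = flow.randn(3, 4).to_global(placement=placement, sbp=sbp)

with flow.autocast(device_type="cuda"):
    c = flow.matmul(a, b)

# Autocast on CUDA normally runs matmul in float16 (assumed expectation).
assert c.dtype == flow.float16
# The result should remain a global tensor with the same placement/sbp.
assert c.is_global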
Speed stats:
GPU Name: NVIDIA GeForce RTX 3080 Ti

❌ OneFlow resnet50 time: 43.2ms (= 4322.0ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 57.3ms (= 5733.8ms / 100, input_shape=[16, 3, 224, 224])
✔️ Relative speed: 1.33 (= 57.3ms / 43.2ms)

OneFlow resnet50 time: 26.5ms (= 2646.1ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 37.1ms (= 3710.4ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.40 (= 37.1ms / 26.5ms)

OneFlow resnet50 time: 17.8ms (= 3552.6ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 35.3ms (= 7052.8ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.99 (= 35.3ms / 17.8ms)

OneFlow resnet50 time: 15.5ms (= 3096.9ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 31.4ms (= 6279.5ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 2.03 (= 31.4ms / 15.5ms)

OneFlow resnet50 time: 15.0ms (= 3004.1ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 28.7ms (= 5731.9ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.91 (= 28.7ms / 15.0ms)

OneFlow swin dataloader time: 0.199s (= 39.777s / 200, num_workers=1)
PyTorch swin dataloader time: 0.128s (= 25.690s / 200, num_workers=1)
Relative speed: 0.646 (= 0.128s / 0.199s)

OneFlow swin dataloader time: 0.057s (= 11.333s / 200, num_workers=4)
PyTorch swin dataloader time: 0.033s (= 6.639s / 200, num_workers=4)
Relative speed: 0.586 (= 0.033s / 0.057s)

OneFlow swin dataloader time: 0.038s (= 7.618s / 200, num_workers=8)
PyTorch swin dataloader time: 0.022s (= 4.346s / 200, num_workers=8)
Relative speed: 0.571 (= 0.022s / 0.038s)

❌ OneFlow resnet50 time: 49.3ms (= 4926.9ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 65.5ms (= 6554.8ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.33 (= 65.5ms / 49.3ms)

OneFlow resnet50 time: 36.8ms (= 3684.1ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 46.5ms (= 4647.9ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.26 (= 46.5ms / 36.8ms)

OneFlow resnet50 time: 28.4ms (= 5670.9ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 39.9ms (= 7987.3ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.41 (= 39.9ms / 28.4ms)

OneFlow resnet50 time: 25.7ms (= 5141.0ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 39.6ms (= 7916.6ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.54 (= 39.6ms / 25.7ms)

OneFlow resnet50 time: 24.7ms (= 4945.4ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 37.4ms (= 7477.2ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.51 (= 37.4ms / 24.7ms)
