enable tp on CPU #36299
base: main
Conversation
Is there a reason we want to support TP on CPU? I assumed it would mainly be useful for multi-GPU nodes.
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Intel Xeon CPUs have multiple NUMA nodes, which means we can run a tensor-parallel model with each shard on its own NUMA node. Currently we can enable this functionality. Besides, we should always make sure that the CPU device cannot be assigned an index.
In that case this change makes sense to me, but maybe we should just raise an error saying that TP on CPU is not supported yet, rather than setting the index to
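For illustration, a minimal sketch of the guard this exchange is discussing, assuming a hypothetical helper name (`tp_device` is not an actual transformers function): the per-rank device keeps its index on accelerators but drops it on CPU.

import torch

# Hedged sketch with a hypothetical helper name: pick the device for a given TP rank.
# Accelerators use a per-rank index ("cuda:0", "cuda:1", ...), while "cpu" must stay
# index-free because PyTorch does not address CPUs by index.
def tp_device(device_type: str, rank: int) -> torch.device:
    if device_type == "cpu":
        return torch.device("cpu")
    return torch.device(device_type, rank)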
Actually the TP functionality is ready on CPU; just run the following code:

CMD:

import os
import time

import torch
import torch.distributed as dist
from transformers import AutoModel, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"


def main(is_tp, rank, world_size) -> None:
    backend = "ccl"
    print(is_tp)
    if is_tp:
        dist.init_process_group(backend)

    model_kwargs = dict(torch_dtype=torch.bfloat16)
    if is_tp:
        model_kwargs["tp_plan"] = "auto"
    else:
        model_kwargs["device_map"] = "cpu"

    # Retrieve tensor parallel model
    model = AutoModel.from_pretrained(model_id, **model_kwargs)
    print(model.dtype)

    # Prepare input tokens
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    prompt = "Can I help" * 200
    # truncation=True is needed for max_length to take effect
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512).input_ids.to(model.device)
    print(f"input shape is {inputs.shape}")

    # model = torch.compile(model)

    # warm-up (barriers only make sense once the process group is initialized)
    if is_tp:
        dist.barrier()
    for i in range(5):
        outputs = model(inputs)
    if is_tp:
        dist.barrier()

    for i in range(5):
        with torch.no_grad():
            start = time.time()
            outputs = model(inputs)
            end = time.time()
            print(f"time cost {(end - start) * 1000} ms")
    print(outputs)


if __name__ == "__main__":
    rank = int(os.environ["RANK"]) if "RANK" in os.environ else 0
    world_size = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1
    is_tp = "RANK" in os.environ
    main(is_tp, rank, world_size)
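The launch command above (the `CMD:` label) is elided, so as an assumption rather than the author's actual command: the script expects a torchrun-style launcher to set `RANK`/`WORLD_SIZE`, and the `ccl` backend to be registered by Intel's oneCCL bindings. A small hedged check of those prerequisites:

import os
import torch.distributed as dist

# Hedged sketch (an assumption, not part of the original snippet): verify the
# prerequisites for running the script above under tensor parallelism on CPU.
def tp_environment_ready() -> bool:
    has_ranks = "RANK" in os.environ and "WORLD_SIZE" in os.environ
    try:
        # Importing Intel's oneCCL bindings registers the "ccl" backend, if installed.
        import oneccl_bindings_for_pytorch  # noqa: F401
        has_ccl = True
    except ImportError:
        has_ccl = False
    # gloo ships with PyTorch and also runs collectives on CPU, as a fallback.
    return has_ranks and (has_ccl or dist.is_gloo_available())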
Hey @jiqing-feng! The TP code is going to change quite a bit in the near future as we work to improve loading efficiency, so it would be best to put this issue on hold for now and revisit afterwards 🤗
OK, but I suppose this change is tiny enough not to impact the refactor. It's okay to wait for your refactor.
Hi @SunMarc @Rocketknight1 @Cyrilvallez. As this change is really tiny, and the logic that the CPU device cannot be assigned an index is reasonable, could we merge this PR? We will optimize TP performance on CPU in our next step.
Yeah, I think we can merge this without impacting the refactor cc @Cyrilvallez
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Super nice! It's missing:
- some doc about the feature (I did not know we could do this so easily on CPU!)
- a simple fast test to make sure this is not broken in the future (a minimal sketch follows after this list)
- rebasing, as Update form pretrained to make TP a first class citizen #36335 was merged!
Otherwise, very welcome 🤗
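A minimal sketch of the kind of fast test requested above, reusing the hypothetical `tp_device` helper from the earlier sketch (not an actual transformers test):

import torch

def tp_device(device_type: str, rank: int) -> torch.device:
    # Hypothetical helper mirroring the PR's intent: never attach a rank index on CPU.
    return torch.device("cpu") if device_type == "cpu" else torch.device(device_type, rank)

def test_cpu_device_has_no_index():
    # Constructing torch.device objects needs no accelerator, so this test stays fast.
    assert tp_device("cpu", 1).index is None
    assert tp_device("cuda", 1).index == 1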
Hi @ArthurZucker,
Converting to draft because of the new regression:
Sounds great! 🤗
The CPU device cannot use an index.
If we pass an index for the CPU device, the check will never pass.
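A hedged illustration of the comment above (not the actual transformers check): an indexed CPU device never compares equal to the plain `cpu` device, so an equality check against `torch.device("cpu")` cannot pass once an index is attached.

import torch

# The plain CPU device carries no index, while an indexed one does, so they compare unequal.
assert torch.device("cpu").index is None
assert torch.device("cpu", 0).index == 0
assert torch.device("cpu") != torch.device("cpu", 0)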