
Added tensor_parallelism examples #3047

Merged
merged 5 commits into main from tensor-parallelism on Aug 23, 2024

Conversation

cehongwang
Collaborator

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-07-30 18:59:39.314909+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-07-30 19:07:41.231563+00:00
@@ -75,11 +75,11 @@
    backend=backend,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
-        "min_block_size": 1
+        "min_block_size": 1,
    },
    dynamic=False,
)

for i in range(10):
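
For reference, the trailing comma above is the formatter's fix to the options dict passed to torch.compile in tensor_parallel_simple_example.py. Below is a minimal sketch of how such an options dict feeds a torch_tensorrt-backed torch.compile call; the toy model and input are hypothetical placeholders, not the example's actual sharded module.

import torch
import torch_tensorrt  # registers the "torch_tensorrt" dynamo backend

# Hypothetical stand-in module and input (the real example shards a model across ranks).
model = torch.nn.Linear(16, 16).cuda()
inp = torch.randn(4, 16, device="cuda")

compiled = torch.compile(
    model,
    backend="torch_tensorrt",
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
        "min_block_size": 1,  # trailing comma here is the lint fix shown above
    },
    dynamic=False,
)

for i in range(10):
    out = compiled(inp)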

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-05 20:24:04.383629+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-05 20:25:40.820373+00:00
@@ -75,11 +75,11 @@
    backend=backend,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
-        "min_block_size": 1
+        "min_block_size": 1,
    },
    dynamic=False,
)

for i in range(10):
--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py	2024-08-05 20:24:04.383629+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py	2024-08-05 20:25:40.827300+00:00
@@ -2,11 +2,13 @@
import torch_tensorrt
from llama3_model import Transformer, ModelArgs
from torch.distributed._composable.fsdp import MixedPrecisionPolicy
from torch.distributed._composable.fsdp.fully_shard import fully_shard
from torch.distributed._tensor import Replicate, Shard
-from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import checkpoint_wrapper
+from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
+    checkpoint_wrapper,
+)
from torch.distributed.device_mesh import DeviceMesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    PrepareModuleInput,
    RowwiseParallel,
@@ -14,10 +16,11 @@
    parallelize_module,
)
import time
from torch.distributed.device_mesh import init_device_mesh
import os
+

# Taken and modified pytorch lightening
# https://lightning.ai/lightning-ai/studios/tensor-parallelism-supercharging-large-model-training-with-pytorch-lightning
def parallelize(model: Transformer, tp_mesh: DeviceMesh) -> Transformer:
    """Apply parallelisms and activation checkpointing to the model.
@@ -32,12 +35,14 @@
        # 2. Parallelize the root norm layer over the sequence dim
        # 3. Shard the first transformer block's inputs

        # Parallelize the first embedding and the last linear out projection
        plan = {
-            "tok_embeddings": RowwiseParallel(input_layouts=Replicate(),
-                                              output_layouts=Shard(1),),
+            "tok_embeddings": RowwiseParallel(
+                input_layouts=Replicate(),
+                output_layouts=Shard(1),
+            ),
            "output": ColwiseParallel(
                input_layouts=Shard(1),
            ),
            "norm": SequenceParallel(),
        }
@@ -83,11 +88,18 @@
_world_size = int(os.environ["WORLD_SIZE"])


tp_mesh = init_device_mesh(device_type="cuda", mesh_shape=(_world_size,))

-model_args = ModelArgs(vocab_size=128256, dim=8192, n_layers=80, n_heads=64, rope_theta=500000.0, n_kv_heads=8)
+model_args = ModelArgs(
+    vocab_size=128256,
+    dim=8192,
+    n_layers=80,
+    n_heads=64,
+    rope_theta=500000.0,
+    n_kv_heads=8,
+)

# model_args = ModelArgs(vocab_size=32000, dim=2048, n_layers=8, n_heads=32)
model = Transformer(model_args).to("cuda")
model = parallelize(model, tp_mesh)
model.eval()
--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/llama3_model.py	2024-08-05 20:24:04.383629+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/llama3_model.py	2024-08-05 20:25:40.985216+00:00
@@ -166,18 +166,26 @@
    """

    def __init__(self, model_args: ModelArgs):
        super().__init__()
        self.n_heads = model_args.n_heads
-        self.n_kv_heads = model_args.n_heads if model_args.n_kv_heads is None else model_args.n_kv_heads
+        self.n_kv_heads = (
+            model_args.n_heads
+            if model_args.n_kv_heads is None
+            else model_args.n_kv_heads
+        )
        self.n_rep = self.n_heads // self.n_kv_heads
        self.head_dim = model_args.dim // model_args.n_heads

-        self.wq = nn.Linear(model_args.dim, model_args.n_heads * self.head_dim, bias=False)
+        self.wq = nn.Linear(
+            model_args.dim, model_args.n_heads * self.head_dim, bias=False
+        )
        self.wk = nn.Linear(model_args.dim, self.n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(model_args.dim, self.n_kv_heads * self.head_dim, bias=False)
-        self.wo = nn.Linear(model_args.n_heads * self.head_dim, model_args.dim, bias=False)
+        self.wo = nn.Linear(
+            model_args.n_heads * self.head_dim, model_args.dim, bias=False
+        )

    def init_weights(self, init_std: float):
        for linear in (self.wq, self.wk, self.wv):
            nn.init.trunc_normal_(linear.weight, mean=0.0, std=0.02)
        nn.init.trunc_normal_(self.wo.weight, mean=0.0, std=init_std)
@@ -214,11 +222,13 @@
        xk = keys.transpose(1, 2)  # (bs, n_local_heads, seqlen, head_dim)
        xv = values.transpose(1, 2)  # (bs, n_local_heads, seqlen, head_dim)

        # we use casual mask for training
        output = F.scaled_dot_product_attention(xq, xk, xv, is_causal=True)
-        output = output.transpose(1, 2).contiguous()  # (bs, seqlen, n_local_heads, head_dim)
+        output = output.transpose(
+            1, 2
+        ).contiguous()  # (bs, seqlen, n_local_heads, head_dim)
        output = output.view(bs, seqlen, -1)
        return self.wo(output)


class FeedForward(nn.Module):
@@ -445,6 +455,6 @@

        Returns:
            Transformer: Transformer model.

        """
-        return cls(model_args)
\ No newline at end of file
+        return cls(model_args)
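
For reference, the plan dict reformatted above is the tensor-parallel layout for the Llama example's top-level modules. Below is a minimal, self-contained sketch of applying such a plan with parallelize_module under torchrun; TinyModel is a placeholder standing in for the example's Transformer, not the actual file contents.

import os

import torch
import torch.nn as nn
from torch.distributed._tensor import Replicate, Shard
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    SequenceParallel,
    parallelize_module,
)

# Run under torchrun, e.g.: torchrun --nproc_per_node=2 tp_sketch.py
_world_size = int(os.environ["WORLD_SIZE"])
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
tp_mesh = init_device_mesh(device_type="cuda", mesh_shape=(_world_size,))

class TinyModel(nn.Module):
    """Placeholder with the same top-level module names as the Llama example."""

    def __init__(self, vocab: int = 128, dim: int = 64):
        super().__init__()
        self.tok_embeddings = nn.Embedding(vocab, dim)
        self.norm = nn.LayerNorm(dim)
        self.output = nn.Linear(dim, vocab, bias=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.output(self.norm(self.tok_embeddings(tokens)))

model = TinyModel().to("cuda")

plan = {
    # Shard the embedding row-wise; emit activations sharded on the sequence dim.
    "tok_embeddings": RowwiseParallel(
        input_layouts=Replicate(),
        output_layouts=Shard(1),
    ),
    # The output projection consumes sequence-sharded activations.
    "output": ColwiseParallel(input_layouts=Shard(1)),
    # The root norm layer is parallelized over the sequence dimension.
    "norm": SequenceParallel(),
}

model = parallelize_module(model, tp_mesh, plan)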

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-05 23:53:07.924207+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-05 23:54:49.457335+00:00
@@ -75,11 +75,11 @@
    backend=backend,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
-        "min_block_size": 1
+        "min_block_size": 1,
    },
    dynamic=False,
)

for i in range(10):

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-05 23:54:08.089203+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-05 23:55:48.759331+00:00
@@ -75,11 +75,11 @@
    backend=backend,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
-        "min_block_size": 1
+        "min_block_size": 1,
    },
    dynamic=False,
)

for i in range(10):

@narendasan
Collaborator

@cehongwang can you lint so we can just merge these in?

@github-actions github-actions bot added the component: api [Python] (Issues re: Python API) and component: dynamo (Issues relating to the `torch.compile` or `torch._dynamo.export` paths) labels on Aug 20, 2024
@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py	2024-08-20 22:53:14.793295+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py	2024-08-20 22:55:07.454019+00:00
@@ -13,10 +13,11 @@
from torch.distributed.device_mesh import DeviceMesh, init_device_mesh

# Taken and modified pytorch lightening
# https://lightning.ai/lightning-ai/studios/tensor-parallelism-supercharging-large-model-training-with-pytorch-lightning
import logging
+
_rank = int(os.environ["RANK"])
_world_size = int(os.environ["WORLD_SIZE"])
tp_size = 2

logger = logging.getLogger()
@@ -54,11 +55,11 @@
            "truncate_long_and_double": True,
            "enabled_precisions": {torch.float32, torch.float16},
            "use_python_runtime": True,
            "workspace_size": 1 << 33,
            "debug": False,
-            "timing_cache_path":"/opt/file/cache/timing_cache_llama.bin"
+            "timing_cache_path": "/opt/file/cache/timing_cache_llama.bin",
        },
        dynamic=False,
    )
    for i in range(15):
        # seeding with dp_rank to ensure identical inputs for TP groups
--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-20 22:53:14.793295+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-20 22:55:07.477068+00:00
@@ -75,11 +75,11 @@
    backend=backend,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
-        "min_block_size": 1
+        "min_block_size": 1,
    },
    dynamic=False,
)

for i in range(10):
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/_compiler.py	2024-08-20 22:53:14.805295+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/_compiler.py	2024-08-20 22:55:08.153113+00:00
@@ -361,24 +361,27 @@
    # Store TRT replicas of Torch subgraphs
    trt_modules = {}
    # Iterate over all components that can be accelerated
    # Generate the corresponding TRT Module for those
    logger.info(f"-" * 100)
-    logger.info(f"There are {len(list(partitioned_module.named_children()))} submodules in total.")
+    logger.info(
+        f"There are {len(list(partitioned_module.named_children()))} submodules in total."
+    )
    i = 0
    import os
+
    for name, _ in partitioned_module.named_children():
        # Benchmark log utilities
        i += 1
        logger.info(f"-" * 100)
        logger.info(f"Start compiling {i}th submodule")
        total = torch.cuda.get_device_properties(0).total_memory

        submodule = getattr(partitioned_module, name)
        # Criteria for a module to be convertible to TRT
        if settings.use_fast_partitioner and "_run_on_acc" not in name:
-        # if (settings.use_fast_partitioner and "_run_on_acc" not in name) or int(os.environ["RANK"]) == 1:
+            # if (settings.use_fast_partitioner and "_run_on_acc" not in name) or int(os.environ["RANK"]) == 1:
            dryrun_tracker.to_run_in_torch.extend(parse_non_trt_nodes(submodule))
            continue

        subgraph_data = PerSubgraphData()
        subgraph_data.subgraph_name = name

@github-actions github-actions bot removed the component: api [Python] (Issues re: Python API) and component: dynamo (Issues relating to the `torch.compile` or `torch._dynamo.export` paths) labels on Aug 22, 2024
@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-22 21:42:59.714106+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_simple_example.py	2024-08-22 21:44:38.881398+00:00
@@ -75,11 +75,11 @@
    backend=backend,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.float32, torch.float16},
        "use_python_runtime": True,
-        "min_block_size": 1
+        "min_block_size": 1,
    },
    dynamic=False,
)

for i in range(10):

Collaborator

@narendasan narendasan left a comment

LGTM

@narendasan narendasan merged commit 846fdd2 into main Aug 23, 2024
54 of 67 checks passed
@narendasan narendasan deleted the tensor-parallelism branch August 23, 2024 18:26