RDMA/cxgb3: Fix page shift calculation in build_phys_page_list()
Author:    Steve Wise <swise@opengridcomputing.com>
Date:      Mon, 21 Jan 2008 20:42:11 +0000 (14:42 -0600)
Committer: Roland Dreier <rolandd@cisco.com>
Date:      Fri, 25 Jan 2008 22:17:45 +0000 (14:17 -0800)
The existing logic incorrectly maps this buffer list:

    0: addr 0x10001000, size 0x1000
    1: addr 0x10002000, size 0x1000

To this bogus page list:

    0: 0x10000000
    1: 0x10002000

The shift calculation must also take into account the address of the
first entry masked by PAGE_MASK, as well as the last entry's
address + size rounded up to the next page boundary.
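
For illustration only (this is not part of the patch), here is a
minimal user-space sketch of the failure, assuming the driver derives
the page shift from the lowest bit set in the accumulated mask at or
above PAGE_SHIFT.  With only the second buffer's address OR'ed into
the mask, the example above produces a 13-bit shift and the bogus
0x10000000 starting page:

    /* Standalone sketch, not driver code: reproduce the old mask logic
     * for the two-entry buffer list above; PAGE_SHIFT is a local
     * stand-in for the kernel macro, assumed to be 12 (4KB pages).
     */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12

    struct phys_buf { uint64_t addr; uint64_t size; };

    int main(void)
    {
            struct phys_buf buf[] = {
                    { 0x10001000ULL, 0x1000ULL },
                    { 0x10002000ULL, 0x1000ULL },
            };
            uint64_t mask = 0;
            int i, shift;

            /* Old logic: only entries after the first contribute. */
            for (i = 0; i < 2; i++)
                    if (i > 0)
                            mask |= buf[i].addr;

            /* Smallest shift at or above PAGE_SHIFT whose bit is set
             * in the mask (assumption about how the shift is chosen).
             */
            for (shift = PAGE_SHIFT; shift < 27; shift++)
                    if ((1ULL << shift) & mask)
                            break;

            /* Prints "shift 13, first page 0x10000000": an 8KB page
             * that starts 4KB before the first buffer.
             */
            printf("shift %d, first page 0x%llx\n", shift,
                   (unsigned long long)(buf[0].addr & ~((1ULL << shift) - 1)));
            return 0;
    }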

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
diff --git a/drivers/infiniband/hw/cxgb3/iwch_mem.c b/drivers/infiniband/hw/cxgb3/iwch_mem.c
index a6c2c4ba29e69244b896849b376698a5c2ab7cd2..73bfd1656f86434bae2c36215a6d073607a07a7c 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_mem.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_mem.c
@@ -122,6 +122,13 @@ int build_phys_page_list(struct ib_phys_buf *buffer_list,
                *total_size += buffer_list[i].size;
                if (i > 0)
                        mask |= buffer_list[i].addr;
+               else
+                       mask |= buffer_list[i].addr & PAGE_MASK;
+               if (i != num_phys_buf - 1)
+                       mask |= buffer_list[i].addr + buffer_list[i].size;
+               else
+                       mask |= (buffer_list[i].addr + buffer_list[i].size +
+                               PAGE_SIZE - 1) & PAGE_MASK;
        }
 
        if (*total_size > 0xFFFFFFFFULL)
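
As a cross-check of the hunk above (again an illustrative user-space
sketch, not the driver function itself), the same example with the two
new mask terms (the first address masked by PAGE_MASK, and the final
address + size rounded up to a page boundary) brings the shift back
down to PAGE_SHIFT, so the page list becomes 0x10001000 and 0x10002000
as expected:

    /* Standalone sketch of the corrected mask accumulation, applied to
     * the example buffer list from the changelog; PAGE_* are local
     * stand-ins for the kernel macros, with PAGE_SHIFT assumed 12.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1ULL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    struct phys_buf { uint64_t addr; uint64_t size; };

    int main(void)
    {
            struct phys_buf buf[] = {
                    { 0x10001000ULL, 0x1000ULL },
                    { 0x10002000ULL, 0x1000ULL },
            };
            int n = 2, i, shift;
            uint64_t mask = 0;

            for (i = 0; i < n; i++) {
                    if (i > 0)
                            mask |= buf[i].addr;
                    else
                            mask |= buf[i].addr & PAGE_MASK;
                    if (i != n - 1)
                            mask |= buf[i].addr + buf[i].size;
                    else
                            mask |= (buf[i].addr + buf[i].size +
                                     PAGE_SIZE - 1) & PAGE_MASK;
            }

            /* Assumption: the driver picks the lowest set bit of the
             * mask at or above PAGE_SHIFT as the page shift.
             */
            for (shift = PAGE_SHIFT; shift < 27; shift++)
                    if ((1ULL << shift) & mask)
                            break;

            /* Prints "shift 12, pages 0x10001000 0x10002000". */
            printf("shift %d, pages 0x%llx 0x%llx\n", shift,
                   (unsigned long long)(buf[0].addr & ~((1ULL << shift) - 1)),
                   (unsigned long long)(buf[1].addr & ~((1ULL << shift) - 1)));
            return 0;
    }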