Memory Mapping

Overview

In the Linux kernel it is possible to map a kernel address space to a user address space. This eliminates the overhead of copying user space information into the kernel space and vice versa. This can be done through a device driver and the user space device interface (/dev).

This feature can be used by implementing the mmap() operation in the device driver’s struct file_operations and using the mmap() system call in user space.

The basic unit for virtual memory management is a page, whose size is usually 4 KB but can be as large as 64 KB on some platforms. Whenever we work with virtual memory we work with two types of addresses: virtual addresses and physical addresses. All CPU accesses (including from kernel space) use virtual addresses, which the MMU translates into physical addresses with the help of page tables.

A physical page of memory is identified by its Page Frame Number (PFN). The PFN can easily be computed from the physical address by dividing it by the page size (or by shifting the physical address to the right by PAGE_SHIFT bits).
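
For example, a minimal sketch of the conversion (the physical address value below is only an illustration):

phys_addr_t phys = 0x2f000000;                            /* example physical address */
unsigned long pfn = phys >> PAGE_SHIFT;                   /* same as phys / PAGE_SIZE */
phys_addr_t page_start = (phys_addr_t)pfn << PAGE_SHIFT;  /* back to the page's start address */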

For efficiency reasons, the virtual address space is divided into user space and kernel space. For the same reason, the kernel space contains a memory mapped zone, called lowmem, which is contiguously mapped in physical memory, starting from the lowest possible physical address (usually 0). The virtual address where lowmem is mapped is defined by PAGE_OFFSET.

On a 32-bit system not all of the available memory can be mapped into lowmem, so there is a separate zone in kernel space, called highmem, which can be used to map arbitrary physical memory.

Memory allocated with kmalloc() resides in lowmem and is physically contiguous. Memory allocated with vmalloc() is not physically contiguous and does not reside in lowmem (it has a dedicated zone in highmem).
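
As a quick illustration (a hedged sketch, not part of the lab code): virt_to_phys() is only meaningful for lowmem addresses such as those returned by kmalloc(), while vmalloc()'ed pages have to be looked up individually:

/* NULL checks omitted for brevity */
void *kbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);    /* lowmem, physically contiguous */
phys_addr_t kphys = virt_to_phys(kbuf);         /* valid: kbuf is in the linear mapping */

void *vbuf = vmalloc(4 * PAGE_SIZE);            /* only virtually contiguous */
unsigned long first_pfn = vmalloc_to_pfn(vbuf); /* each page must be translated separately */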


Structures used for memory mapping

Before discussing the mechanism of memory-mapping a device, we will present some of the basic structures used by the memory management subsystem of the Linux kernel: struct page, struct vm_area_struct, and struct mm_struct.

struct page

struct page holds information about a physical page; the kernel keeps one struct page structure for every physical page in the system.

There are many functions that interact with this structure (a short usage sketch follows the list below):

  • virt_to_page() returns the page associated with a virtual address
  • pfn_to_page() returns the page associated with a page frame number
  • page_to_pfn() returns the page frame number associated with a struct page
  • page_address() returns the virtual address of a struct page; this function can only be called for pages from lowmem
  • kmap() creates a mapping in kernel for an arbitrary physical page (can be from highmem) and returns a virtual address that can be used to directly reference the page
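
A minimal sketch of how these helpers fit together (assuming a page obtained with alloc_page(); error checking omitted):

struct page *page = alloc_page(GFP_KERNEL);   /* allocate one physical page */
unsigned long pfn = page_to_pfn(page);        /* its page frame number */
struct page *same = pfn_to_page(pfn);         /* back to the same struct page */

void *vaddr = page_address(page);             /* valid only for lowmem pages */

void *kaddr = kmap(page);                     /* works for highmem pages as well */
/* ... access the page through kaddr ... */
kunmap(page);

__free_page(page);
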
struct vm_area_struct

struct vm_area_struct holds information about a contiguous virtual memory area. The memory areas of a process can be viewed by inspecting the maps attribute of the process via procfs:

root@qemux86:~# cat /proc/1/maps
#address          perms offset  device inode     pathname
08048000-08050000 r-xp 00000000 fe:00 761        /sbin/init.sysvinit
08050000-08051000 r--p 00007000 fe:00 761        /sbin/init.sysvinit
08051000-08052000 rw-p 00008000 fe:00 761        /sbin/init.sysvinit
092e1000-09302000 rw-p 00000000 00:00 0          [heap]
4480c000-4482e000 r-xp 00000000 fe:00 576        /lib/ld-2.25.so
4482e000-4482f000 r--p 00021000 fe:00 576        /lib/ld-2.25.so
4482f000-44830000 rw-p 00022000 fe:00 576        /lib/ld-2.25.so
44832000-449a9000 r-xp 00000000 fe:00 581        /lib/libc-2.25.so
449a9000-449ab000 r--p 00176000 fe:00 581        /lib/libc-2.25.so
449ab000-449ac000 rw-p 00178000 fe:00 581        /lib/libc-2.25.so
449ac000-449af000 rw-p 00000000 00:00 0
b7761000-b7763000 rw-p 00000000 00:00 0
b7763000-b7766000 r--p 00000000 00:00 0          [vvar]
b7766000-b7767000 r-xp 00000000 00:00 0          [vdso]
bfa15000-bfa36000 rw-p 00000000 00:00 0          [stack]

A memory area is characterized by a start address, an end address, a length, and permissions.

A struct vm_area_struct is created at each mmap() call issued from user space. A driver that supports the mmap() operation must complete and initialize the associated struct vm_area_struct. The most important fields of this structure (a short usage sketch follows the list below) are:

  • vm_start, vm_end - the beginning and the end of the memory area, respectively (these fields also appear in /proc/<pid>/maps);
  • vm_file - the pointer to the associated file structure (if any);
  • vm_pgoff - the offset of the area within the file;
  • vm_flags - a set of flags;
  • vm_ops - a set of working functions for this area
  • vm_next, vm_prev - the areas of the same process are chained by a list structure
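
A minimal sketch of how a driver's mmap() handler might read these fields (BUFFER_SIZE is a hypothetical constant naming the size of the driver's buffer; the checks are only an illustration):

static int example_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long size = vma->vm_end - vma->vm_start;   /* length of the requested mapping */
    unsigned long off = vma->vm_pgoff << PAGE_SHIFT;    /* requested offset, in bytes */

    if (off + size > BUFFER_SIZE)                       /* refuse mappings past the buffer */
        return -EINVAL;

    if (!(vma->vm_flags & VM_SHARED))                   /* e.g. only allow shared mappings */
        return -EINVAL;

    /* ... map the memory with remap_pfn_range(), as shown below ... */
    return 0;
}
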
struct mm_struct

struct mm_struct encompasses all memory areas associated with a process. The mm field of struct task_struct is a pointer to the struct mm_struct of the current process.

Device driver memory mapping

Memory mapping is one of the most interesting features of a Unix system. From a driver’s point of view, the memory-mapping facility gives user-space programs direct access to device memory.

To assign a mmap() operation to a driver, the mmap field of the device driver’s struct file_operations must be implemented. If that is the case, the user space process can then use the mmap() system call on a file descriptor associated with the device.

The mmap system call takes the following parameters:

void *mmap(caddr_t addr, size_t len, int prot,
           int flags, int fd, off_t offset);

To map memory between a device and user space, the user process must open the device and issue the mmap() system call with the resulting file descriptor.
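
A minimal user-space sketch of these two steps (the device path /dev/mydev is only an illustration; error handling is kept to the essentials):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mydev", O_RDWR | O_SYNC);    /* hypothetical device node */
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* ask the kernel to map one page of the device's memory into our address space */
    char *addr = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    addr[0] = 'x';              /* accesses go straight to the mapped memory */

    munmap(addr, getpagesize());
    close(fd);
    return 0;
}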

The device driver mmap() operation has the following signature:

int (*mmap)(struct file *filp, struct vm_area_struct *vma);

The filp argument is a pointer to the struct file created when the device was opened from user space. The vma argument describes the virtual address space into which the memory should be mapped. A driver should allocate memory (using kmalloc(), vmalloc() or alloc_pages()) and then map it into the user address space described by the vma parameter, using helper functions such as remap_pfn_range().

remap_pfn_range() will map a contiguous physical address space into the virtual space represented by vm_area_struct:

int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
                    unsigned long pfn, unsigned long size, pgprot_t prot);

remap_pfn_range() expects the following parameters:

  • vma - the virtual memory space in which mapping is made;
  • addr - the virtual address space from where remapping begins; page tables for the virtual address space between addr and addr + size will be formed as needed
  • pfn - the page frame number to which the virtual address should be mapped
  • size - the size (in bytes) of the memory to be mapped
  • prot - protection flags for this mapping

Here is an example of using this function that contiguously maps the physical memory starting at page frame number pfn (memory that was previously allocated) to the vma->vm_start virtual address:

/* vma is the struct vm_area_struct received by the driver's mmap() handler and
 * pfn is the page frame number of the (previously allocated) physical memory */
unsigned long len = vma->vm_end - vma->vm_start;
int ret;

ret = remap_pfn_range(vma, vma->vm_start, pfn, len, vma->vm_page_prot);
if (ret < 0) {
    pr_err("could not map the address area\n");
    return -EIO;
}

To obtain the page frame number of the physical memory we must take into account how the memory allocation was performed. A different approach is needed for each of kmalloc(), vmalloc() and alloc_pages(). For kmalloc() we can use something like:

static char *kmalloc_area;

unsigned long pfn = virt_to_phys((void *)kmalloc_area) >> PAGE_SHIFT;

while for vmalloc():

static char *vmalloc_area;

unsigned long pfn = vmalloc_to_pfn(vmalloc_area);

and finally for alloc_pages():

struct page *page;

unsigned long pfn = page_to_pfn(page);

Attention

Note that memory allocated with vmalloc() is not physically contiguous, so if we want to map a range allocated with vmalloc() we have to map each page individually and compute the physical address for each page.
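
A minimal sketch of such a per-page loop inside the driver's mmap() handler (a complete version appears in vmmap.c below):

unsigned long start = vma->vm_start;
long length = vma->vm_end - vma->vm_start;
char *vmalloc_area_ptr = vmalloc_area;   /* start of the vmalloc()'ed buffer */

while (length > 0) {
    unsigned long pfn = vmalloc_to_pfn(vmalloc_area_ptr);
    int ret = remap_pfn_range(vma, start, pfn, PAGE_SIZE, vma->vm_page_prot);

    if (ret < 0)
        return ret;
    start += PAGE_SIZE;
    vmalloc_area_ptr += PAGE_SIZE;
    length -= PAGE_SIZE;
}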

Since the pages are mapped to user space, they might be swapped out. To avoid this we must set the PG_reserved bit on the page. Setting the bit is done with SetPageReserved(), while clearing it (which must be done before freeing the memory) is done with ClearPageReserved():

void *alloc_mmap_pages(int npages)
{
    int i;
    char *mem = kmalloc(PAGE_SIZE * npages, GFP_KERNEL);

    if (!mem)
        return NULL;

    for (i = 0; i < npages * PAGE_SIZE; i += PAGE_SIZE)
        SetPageReserved(virt_to_page(((unsigned long)mem) + i));

    return mem;
}

void free_mmap_pages(void *mem, int npages)
{
    int i;

    for (i = 0; i < npages * PAGE_SIZE; i += PAGE_SIZE)
        ClearPageReserved(virt_to_page(((unsigned long)mem) + i));

    kfree(mem);
}

kmmap.c

/*
 * PSO - Memory Mapping Lab(#11)
 *
 * Exercise #1: memory mapping using kmalloc'd kernel areas
 */

#include <linux/version.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <asm/pgtable.h>
#include <linux/sched/mm.h>
#include <linux/sched.h>
#include <asm/io.h>
#include <asm/highmem.h>
#include <linux/rmap.h>
#include <asm/uaccess.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

#include "../test/mmap-test.h"

MODULE_DESCRIPTION("simple mmap driver");
MODULE_AUTHOR("PSO");
MODULE_LICENSE("Dual BSD/GPL");

#define MY_MAJOR	42
/* how many pages do we actually kmalloc */
#define NPAGES		16

/* character device basic structure */
static struct cdev mmap_cdev;

/* pointer to kmalloc'd area */
static void *kmalloc_ptr;

/* pointer to the kmalloc'd area, rounded up to a page boundary */
static char *kmalloc_area;

static int my_open(struct inode *inode, struct file *filp)
{
	return 0;
}

static int my_release(struct inode *inode, struct file *filp)
{
	return 0;
}

static ssize_t my_read(struct file *file, char __user *user_buffer,
		size_t size, loff_t *offset)
{
	/* TODO 2/2: check size doesn't exceed our mapped area size */
	if (size > NPAGES * PAGE_SIZE)
		size = NPAGES * PAGE_SIZE;

	/* TODO 2/2: copy from mapped area to user buffer */
	if (copy_to_user(user_buffer, kmalloc_area, size))
		return -EFAULT;

	return size;
}

static ssize_t my_write(struct file *file, const char __user *user_buffer,
		size_t size, loff_t *offset)
{
	/* TODO 2/2: check size doesn't exceed our mapped area size */
	if (size > NPAGES * PAGE_SIZE)
		size = NPAGES * PAGE_SIZE;

	/* TODO 2/3: copy from user buffer to mapped area */
	memset(kmalloc_area, 0, NPAGES * PAGE_SIZE);
	if (copy_from_user(kmalloc_area, user_buffer, size))
		return -EFAULT;

	return size;
}

static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
	int ret;
	long length = vma->vm_end - vma->vm_start;

	/* do not map more than we can */
	if (length > NPAGES * PAGE_SIZE)
		return -EIO;

	/* TODO 1/7: map the whole physically contiguous area in one piece */
	ret = remap_pfn_range(vma, vma->vm_start,
			virt_to_phys((void *)kmalloc_area) >> PAGE_SHIFT,
			length, vma->vm_page_prot);
	if (ret < 0) {
		pr_err("could not map address area\n");
		return ret;
	}

	return 0;
}

static const struct file_operations mmap_fops = {
	.owner = THIS_MODULE,
	.open = my_open,
	.release = my_release,
	.mmap = my_mmap,
	.read = my_read,
	.write = my_write
};

static int my_seq_show(struct seq_file *seq, void *v)
{
	struct mm_struct *mm;
	struct vm_area_struct *vma_iterator;
	unsigned long total = 0;

	/* TODO 3: Get current process' mm_struct */
	mm = get_task_mm(current);

	/* TODO 3/8: Iterate through all memory mappings */
	vma_iterator = mm->mmap;
	while (vma_iterator != NULL) {
		pr_info("%lx %lx\n", vma_iterator->vm_start, vma_iterator->vm_end);
		total += vma_iterator->vm_end - vma_iterator->vm_start;
		vma_iterator = vma_iterator->vm_next;
	}

	/* TODO 3: Release mm_struct */
	mmput(mm);

	/* TODO 3: write the total count to file  */
	seq_printf(seq, "%lu %s\n", total, current->comm);
	return 0;
}

static int my_seq_open(struct inode *inode, struct file *file)
{
	return single_open(file, my_seq_show, NULL);
}

static const struct file_operations my_proc_file_ops = {
	.owner   = THIS_MODULE,
	.open    = my_seq_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static int __init my_init(void)
{
	int ret = 0;
	int i;
	/* TODO 3/7: create a new entry in procfs */
	struct proc_dir_entry *entry;

	entry = proc_create(PROC_ENTRY_NAME, 0, NULL, &my_proc_file_ops);
	if (!entry) {
		ret = -ENOMEM;
		goto out;
	}

	ret = register_chrdev_region(MKDEV(MY_MAJOR, 0), 1, "mymap");
	if (ret < 0) {
		pr_err("could not register region\n");
		goto out_no_chrdev;
	}

	/* TODO 1/6: allocate NPAGES+2 pages using kmalloc */
	kmalloc_ptr = kmalloc((NPAGES + 2) * PAGE_SIZE, GFP_KERNEL);
	if (kmalloc_ptr == NULL) {
		ret = -ENOMEM;
		pr_err("could not allocate memory\n");
		goto out_unreg;
	}

	/* TODO 1: round kmalloc_ptr to nearest page start address */
	kmalloc_area = (char *) PAGE_ALIGN(((unsigned long)kmalloc_ptr));

	/* TODO 1/2: mark pages as reserved */
	for (i = 0; i < NPAGES * PAGE_SIZE; i += PAGE_SIZE)
		SetPageReserved(virt_to_page(((unsigned long)kmalloc_area)+i));

	/* TODO 1/6: write data in each page */
	for (i = 0; i < NPAGES * PAGE_SIZE; i += PAGE_SIZE) {
		kmalloc_area[i] = 0xaa;
		kmalloc_area[i + 1] = 0xbb;
		kmalloc_area[i + 2] = 0xcc;
		kmalloc_area[i + 3] = 0xdd;
	}

	/* Init device. */
	cdev_init(&mmap_cdev, &mmap_fops);
	ret = cdev_add(&mmap_cdev, MKDEV(MY_MAJOR, 0), 1);
	if (ret < 0) {
		pr_err("could not add device\n");
		goto out_kfree;
	}

	return 0;

out_kfree:
	kfree(kmalloc_ptr);
out_unreg:
	unregister_chrdev_region(MKDEV(MY_MAJOR, 0), 1);
out_no_chrdev:
	remove_proc_entry(PROC_ENTRY_NAME, NULL);
out:
	return ret;
}

static void __exit my_exit(void)
{
	int i;

	cdev_del(&mmap_cdev);

	/* TODO 1/3: clear reservation on pages and free mem. */
	for (i = 0; i < NPAGES * PAGE_SIZE; i += PAGE_SIZE)
		ClearPageReserved(virt_to_page(((unsigned long)kmalloc_area)+i));
	kfree(kmalloc_ptr);

	unregister_chrdev_region(MKDEV(MY_MAJOR, 0), 1);
	/* TODO 3: remove proc entry */
	remove_proc_entry(PROC_ENTRY_NAME, NULL);
}

module_init(my_init);
module_exit(my_exit);

vmmap.c

/*
 * PSO - Memory Mapping Lab(#11)
 *
 * Exercise #2: memory mapping using vmalloc'd kernel areas
 */

#include <linux/version.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>
#include <linux/mm.h>
#include <asm/io.h>
#include <linux/uaccess.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

#include "../test/mmap-test.h"


MODULE_DESCRIPTION("simple mmap driver");
MODULE_AUTHOR("PSO");
MODULE_LICENSE("Dual BSD/GPL");

#define MY_MAJOR	42

/* how many pages do we actually vmalloc */
#define NPAGES		16

/* character device basic structure */
static struct cdev mmap_cdev;

/* pointer to the vmalloc'd area, rounded up to a page boundary */
static char *vmalloc_area;

static int my_open(struct inode *inode, struct file *filp)
{
	return 0;
}

static int my_release(struct inode *inode, struct file *filp)
{
	return 0;
}

static ssize_t my_read(struct file *file, char __user *user_buffer,
		size_t size, loff_t *offset)
{
	/* TODO 2/2: check size doesn't exceed our mapped area size */
	if (size > NPAGES * PAGE_SIZE)
		size = NPAGES * PAGE_SIZE;

	/* TODO 2/2: copy from mapped area to user buffer */
	if (copy_to_user(user_buffer, vmalloc_area, size))
		return -EFAULT;

	return size;
}

static ssize_t my_write(struct file *file, const char __user *user_buffer,
		size_t size, loff_t *offset)
{
	/* TODO 2/2: check size doesn't exceed our mapped area size */
	if (size > NPAGES * PAGE_SIZE)
		size = NPAGES * PAGE_SIZE;

	/* TODO 2/3: copy from user buffer to mapped area */
	memset(vmalloc_area, 0, NPAGES * PAGE_SIZE);
	if (copy_from_user(vmalloc_area, user_buffer, size))
		return -EFAULT;

	return size;
}

static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
	int ret;
	long length = vma->vm_end - vma->vm_start;
	unsigned long start = vma->vm_start;
	char *vmalloc_area_ptr = vmalloc_area;
	unsigned long pfn;

	if (length > NPAGES * PAGE_SIZE)
		return -EIO;

	/* TODO 1/9: map pages individually */
	while (length > 0) {
		pfn = vmalloc_to_pfn(vmalloc_area_ptr);
		ret = remap_pfn_range(vma, start, pfn, PAGE_SIZE, PAGE_SHARED);
		if (ret < 0)
			return ret;
		start += PAGE_SIZE;
		vmalloc_area_ptr += PAGE_SIZE;
		length -= PAGE_SIZE;
	}

	return 0;
}

static const struct file_operations mmap_fops = {
	.owner = THIS_MODULE,
	.open = my_open,
	.release = my_release,
	.mmap = my_mmap,
	.read = my_read,
	.write = my_write
};

static int my_seq_show(struct seq_file *seq, void *v)
{
	struct mm_struct *mm;
	struct vm_area_struct *vma_iterator;
	unsigned long total = 0;

	/* TODO 3: Get current process' mm_struct */
	mm = get_task_mm(current);

	/* TODO 3/8: Iterate through all memory mappings and print ranges */
	vma_iterator = mm->mmap;
	while (vma_iterator != NULL) {
		pr_info("%lx %lx\n", vma_iterator->vm_start, vma_iterator->vm_end);
		total += vma_iterator->vm_end - vma_iterator->vm_start;
		vma_iterator = vma_iterator->vm_next;
	}

	/* TODO 3: Release mm_struct */
	mmput(mm);

	/* TODO 3: write the total count to file  */
	seq_printf(seq, "%lu %s\n", total, current->comm);
	return 0;
}

static int my_seq_open(struct inode *inode, struct file *file)
{
	return single_open(file, my_seq_show, NULL);
}

static const struct file_operations my_proc_file_ops = {
	.owner   = THIS_MODULE,
	.open    = my_seq_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static int __init my_init(void)
{
	int ret = 0;
	int i;
	/* TODO 3/7: create a new entry in procfs */
	struct proc_dir_entry *entry;

	entry = proc_create(PROC_ENTRY_NAME, 0, NULL, &my_proc_file_ops);
	if (!entry) {
		ret = -ENOMEM;
		goto out;
	}

	ret = register_chrdev_region(MKDEV(MY_MAJOR, 0), 1, "mymap");
	if (ret < 0) {
		pr_err("could not register region\n");
		goto out_no_chrdev;
	}

	/* TODO 1/6: allocate NPAGES using vmalloc */
	vmalloc_area = (char *)vmalloc(NPAGES * PAGE_SIZE);
	if (vmalloc_area == NULL) {
		ret = -ENOMEM;
		pr_err("could not allocate memory\n");
		goto out_unreg;
	}

	/* TODO 1/2: mark pages as reserved */
	for (i = 0; i < NPAGES * PAGE_SIZE; i += PAGE_SIZE)
		SetPageReserved(vmalloc_to_page(vmalloc_area+i));

	/* TODO 1/6: write data in each page */
	for (i = 0; i < NPAGES * PAGE_SIZE; i += PAGE_SIZE) {
		vmalloc_area[i] = 0xaa;
		vmalloc_area[i + 1] = 0xbb;
		vmalloc_area[i + 2] = 0xcc;
		vmalloc_area[i + 3] = 0xdd;
	}

	cdev_init(&mmap_cdev, &mmap_fops);
	ret = cdev_add(&mmap_cdev, MKDEV(MY_MAJOR, 0), 1);
	if (ret < 0) {
		pr_err("could not add device\n");
		goto out_vfree;
	}

	return 0;

out_vfree:
	vfree(vmalloc_area);
out_unreg:
	unregister_chrdev_region(MKDEV(MY_MAJOR, 0), 1);
out_no_chrdev:
	remove_proc_entry(PROC_ENTRY_NAME, NULL);
out:
	return ret;
}

static void __exit my_exit(void)
{
	int i;

	cdev_del(&mmap_cdev);

	/* TODO 1/3: clear reservation on pages and free mem.*/
	for (i = 0; i < NPAGES * PAGE_SIZE; i += PAGE_SIZE)
		ClearPageReserved(vmalloc_to_page(vmalloc_area+i));
	vfree(vmalloc_area);

	unregister_chrdev_region(MKDEV(MY_MAJOR, 0), 1);
	/* TODO 3: remove proc entry */
	remove_proc_entry(PROC_ENTRY_NAME, NULL);
}

module_init(my_init);
module_exit(my_exit);

mmap-test.h

#ifndef __SO2MMAP_H__
#define __SO2MMAP_H__ 1

#define PROC_ENTRY_NAME	"my-proc-entry"

#endif

mmap-test.c

/*
 * PSO - Memory Mapping Lab (#11)
 *
 * Exercise #1, #2: memory mapping between user-space and kernel-space
 *
 * test case
 */

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>
#include <assert.h>
#include <string.h>
#include "mmap-test.h"

#define NPAGES		16
#define MMAP_DEV	"/dev/mymmap"
#define PROC_ENTRY_PATH "/proc/" PROC_ENTRY_NAME

void test_contents(unsigned char *addr,
		unsigned char value1, unsigned char value2,
		unsigned char value3, unsigned char value4)
{
	int i;

	for (i = 0; i < NPAGES * getpagesize(); i += getpagesize()) {
		if (addr[i] != value1 || addr[i + 1] != value2 ||
				addr[i + 2] != value3 || addr[i + 3] != value4)
			printf("0x%x 0x%x 0x%x 0x%x\n", addr[i], addr[i+1],
					addr[i+2], addr[i+3]);
		else
			printf("matched\n");
	}
}

int test_read_write(int fd, unsigned char *mmap_addr)
{
	unsigned char *local_addr;
	int len = NPAGES * getpagesize(), i;

	printf("\nWrite test ...\n");
	/* alloc local memory */
	local_addr = malloc(len);
	if (!local_addr)
		return -1;

	/* init local memory */
	memset(local_addr, 0, len);
	for (i = 0; i < NPAGES * getpagesize(); i += getpagesize()) {
		local_addr[i]   = 0xa0;
		local_addr[i+1] = 0xb0;
		local_addr[i+2] = 0xc0;
		local_addr[i+3] = 0xd0;
	}

	/* write to device */
	write(fd, local_addr, len);

	/* are these values in mapped memory? */
	test_contents(mmap_addr, 0xa0, 0xb0, 0xc0, 0xd0);

	printf("\nRead test ...\n");
	memset(local_addr, 0, len);
	/* read from device */
	read(fd, local_addr, len);
	/* are the values read correct? */
	test_contents(local_addr, 0xa0, 0xb0, 0xc0, 0xd0);
	return 0;
}

static int show_mem_usage(void)
{
	int fd, ret;
	char buf[40];
	unsigned long mem_usage;

	fd = open(PROC_ENTRY_PATH, O_RDONLY);
	if (fd < 0) {
		perror("open " PROC_ENTRY_PATH);
		ret = fd;
		goto out;
	}

	ret = read(fd, buf, sizeof(buf) - 1);
	if (ret < 0)
		goto no_read;
	buf[ret] = 0;

	sscanf(buf, "%lu", &mem_usage);

	printf("Memory usage: %lu\n", mem_usage);

	ret = mem_usage;
no_read:
	close(fd);
out:
	return ret;
}

int main(int argc, const char **argv)
{
	int fd, test = 1;
	unsigned char *addr;
	int len = NPAGES * getpagesize();
	int i;
	unsigned long usage_before_mmap, usage_after_mmap;

	if (argc > 1)
		test = atoi(argv[1]); 

	assert(system("mknod " MMAP_DEV " c 42 0") == 0);

	fd = open(MMAP_DEV, O_RDWR | O_SYNC);
	if (fd < 0) {
		perror("open");
		assert(system("rm " MMAP_DEV) == 0);
		exit(EXIT_FAILURE);
	}

	addr = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		assert(system("rm " MMAP_DEV) == 0);
		exit(EXIT_FAILURE);
	}

	for (i = 0; i < NPAGES * getpagesize(); i += getpagesize()) {
		if (addr[i] != 0xaa || addr[i + 1] != 0xbb ||
				addr[i + 2] != 0xcc || addr[i + 3] != 0xdd)
			printf("0x%x 0x%x 0x%x 0x%x\n", addr[i], addr[i+1],
					addr[i+2], addr[i+3]);
		else
			printf("matched\n");
	}


	if (test >= 2 && test_read_write(fd, addr)) {
		perror("read/write test");
		assert(system("rm " MMAP_DEV) == 0);
		exit(EXIT_FAILURE);
	}

	if (test >= 3) {
		usage_before_mmap = show_mem_usage();
		if (usage_before_mmap < 0)
			printf("failed to show memory usage\n");

		#define SIZE (10 * 1024 * 1024)
		addr = mmap(NULL, SIZE, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
		if (addr == MAP_FAILED)
			perror("mmap_");

		usage_after_mmap = show_mem_usage();
		if (usage_after_mmap < 0)
			printf("failed to show memory usage\n");
		printf("mmaped :%lu MB\n",
		       (usage_after_mmap - usage_before_mmap) >> 20);

		sleep(30);

		munmap(addr, SIZE);
	}

	close(fd);

	assert(system("rm " MMAP_DEV) == 0);

	return 0;
}