mm/page_poison.c: enable PAGE_POISONING as a separate option
Page poisoning is currently set up as a feature if architectures don't
have architecture debug page_alloc to allow unmapping of pages.  It has
uses apart from that though.  Clearing of the pages on free provides an
increase in security as it helps to limit the risk of information leaks.
Allow page poisoning to be enabled as a separate option independent of any
other debug feature.  Because of how hibernation is implemented, the
checks on alloc cannot occur if hibernation is enabled.  This option can
be set on !HIBERNATION configurations as well.

Credit to Mathias Krause and grsecurity for original work

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Jianyu Zhan <nasa4836@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
labbott authored and sfrothwell committed Feb 28, 2016
1 parent 4745d9b commit c2c0751
Showing 5 changed files with 41 additions and 8 deletions.
3 changes: 3 additions & 0 deletions include/linux/mm.h
@@ -2179,10 +2179,13 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 extern void poison_pages(struct page *page, int n);
 extern void unpoison_pages(struct page *page, int n);
 extern bool page_poisoning_enabled(void);
+extern void kernel_poison_pages(struct page *page, int numpages, int enable);
 #else
 static inline void poison_pages(struct page *page, int n) { }
 static inline void unpoison_pages(struct page *page, int n) { }
 static inline bool page_poisoning_enabled(void) { return false; }
+static inline void kernel_poison_pages(struct page *page, int numpages,
+				       int enable) { }
 #endif

 #ifdef CONFIG_DEBUG_PAGEALLOC
22 changes: 21 additions & 1 deletion mm/Kconfig.debug
@@ -41,4 +41,24 @@ config DEBUG_PAGEALLOC_ENABLE_DEFAULT
 	  can be overridden by debug_pagealloc=off|on.

 config PAGE_POISONING
-	bool
+	bool "Poison pages after freeing"
+	select PAGE_EXTENSION
+	select PAGE_POISONING_NO_SANITY if HIBERNATION
+	---help---
+	  Fill the pages with poison patterns after free_pages() and verify
+	  the patterns before alloc_pages. The filling of the memory helps
+	  reduce the risk of information leaks from freed data. This does
+	  have a potential performance impact.
+
+	  If unsure, say N
+
+config PAGE_POISONING_NO_SANITY
+	depends on PAGE_POISONING
+	bool "Only poison, don't sanity check"
+	---help---
+	  Skip the sanity checking on alloc, only fill the pages with
+	  poison on free. This reduces some of the overhead of the
+	  poisoning feature.
+
+	  If you are only interested in sanitization, say Y. Otherwise
+	  say N.
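For reference, a kernel built with this patch would enable the feature via a
.config fragment along these lines (a sketch using only the symbols the hunk
above introduces):

```
# Poison free pages as a standalone hardening feature
CONFIG_PAGE_POISONING=y
# Optionally skip the pattern check on alloc to reduce overhead
CONFIG_PAGE_POISONING_NO_SANITY=y
```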
8 changes: 1 addition & 7 deletions mm/debug-pagealloc.c
@@ -8,11 +8,5 @@

 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!page_poisoning_enabled())
-		return;
-
-	if (enable)
-		unpoison_pages(page, numpages);
-	else
-		poison_pages(page, numpages);
+	kernel_poison_pages(page, numpages, enable);
 }
2 changes: 2 additions & 0 deletions mm/page_alloc.c
@@ -1007,6 +1007,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 			   PAGE_SIZE << order);
 	}
 	arch_free_page(page, order);
+	kernel_poison_pages(page, 1 << order, 0);
 	kernel_map_pages(page, 1 << order, 0);

 	return true;
@@ -1401,6 +1402,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	set_page_refcounted(page);

 	arch_alloc_page(page, order);
+	kernel_poison_pages(page, 1 << order, 1);
 	kernel_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);

14 changes: 14 additions & 0 deletions mm/page_poison.c
@@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
 	unsigned char *start;
 	unsigned char *end;

+	if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
+		return;
+
 	start = memchr_inv(mem, PAGE_POISON, bytes);
 	if (!start)
 		return;
@@ -142,3 +145,14 @@ void unpoison_pages(struct page *page, int n)
 	for (i = 0; i < n; i++)
 		unpoison_page(page + i);
 }
+
+void kernel_poison_pages(struct page *page, int numpages, int enable)
+{
+	if (!page_poisoning_enabled())
+		return;
+
+	if (enable)
+		unpoison_pages(page, numpages);
+	else
+		poison_pages(page, numpages);
+}
