/*
* spanned_pages is the total pages spanned by the zone, including
- * holes, which is calcualted as:
+ * holes, which is calculated as:
* spanned_pages = zone_end_pfn - zone_start_pfn;
*
 * present_pages is physical pages existing within the zone, which
- * by page allocator and vm scanner to calculate all kinds of watermarks
- * and thresholds.
+ * is used by the page allocator and vm scanner to calculate all kinds
+ * of watermarks and thresholds.
*
- * Lock Rules:
+ * Locking rules:
*
- * zone_start_pfn, spanned_pages are protected by span_seqlock.
+ * zone_start_pfn and spanned_pages are protected by span_seqlock.
* It is a seqlock because it has to be read outside of zone->lock,
* and it is done in the main allocator path. But, it is written
* quite infrequently.
- * frequently read in proximity to zone->lock. It's good to
- * give them a chance of being in the same cacheline.
+ * The lock is declared along with zone->lock because it is
+ * frequently read in proximity to zone->lock. It's good to
+ * give them a chance of being in the same cacheline.
*
- * Writing access to present_pages and managed_pages at runtime should
+ * Write access to present_pages and managed_pages at runtime should
* be protected by lock_memory_hotplug()/unlock_memory_hotplug().
- * Any reader who can't tolerant drift of present_pages and
+ * Any reader who can't tolerate drift of present_pages and
* managed_pages should hold memory hotplug lock to get a stable value.
* Read access to zone->managed_pages is safe because it's unsigned long,
* but we still need to serialize writers. Currently all callers of
- * __free_pages_bootmem() except put_page_bootmem() should only be used
+ * __free_pages_bootmem() except put_page_bootmem() are only used
- * at boot time. So for shorter boot time, we have shift the burden to
+ * at boot time. So for shorter boot time, we shift the burden to
* put_page_bootmem() to serialize writers.
*/
void __meminit __free_pages_bootmem(struct page *page, unsigned int order)