Review Article
Understanding and Addressing the High Prevalence of Anemia in India: Nutritional, Interventional and Monitoring Challenges
Barua S*
Independent Researcher, Assam, India
*Corresponding author: Shounak Barua, Independent Researcher, Assam, India. Email: shounak.barua@gmail.com
Article Information: Submission: 05/09/2024; Accepted: 28/09/2024; Published: 30/09/2024
Copyright: ©2024 Barua S. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Anemia is a global health concern characterized by lower-than-normal levels of hemoglobin, affecting approximately 30% of the world population. India accounts for a large proportion of the total globally affected. Low dietary intake of iron is a primary cause of anemia prevalence in India. Despite nationwide efforts such as oral supplementation programs and food fortification, the prevalence of anemia continues to rise, according to India’s 5th National Family Health Survey. This review highlights the limitations of current intervention practices while exploring alternative strategies that could potentially be adopted in conjunction with existing measures. It also argues that the drawbacks in existing monitoring and diagnostic practices, both for examining population susceptibility and for reporting accurate demographics of iron deficiency and anemia in India, must be assessed. Finally, this review illustrates the efficacy of adaptable household measures in determining nutritional status at both community and national levels.
Keywords: Iron Deficiency; Anemia; Iron Deficiency Anemia; IDA; India; Nutrition; Food Processing; Fortification; Intervention; Iron Status
Introduction
Anemia is a globally prevalent clinical condition, which is
characterized by lower than normal levels of red blood cells (RBC)
and hemoglobin (Hb) in the blood stream [1]. This condition limits
the amount of oxygen reaching cells and tissues of the body. Typical
symptoms of anemia include fatigue, dizziness, shortness of breath,
headache, pale or yellow skin, and irregular heartbeat. Depending upon
the causative factors that contribute to a depletion of RBCs, anemia
has been classified into different categories, amongst which iron-deficiency anemia (IDA) is the most prevalent form of this condition
that affects nearly 30% of the global population [2]. Currently, the
segments of the global population most susceptible to anemia are children under five years of age, menstruating women of adolescent and adult age groups, pregnant women, and postpartum women. Factors that subject women to a higher
risk of developing IDA include blood-loss during menstruation and
childbirth, as well as the growing demand for iron to nourish the fetus
in pregnant women. Moreover, severe IDA during pregnancy carries a high risk of premature delivery, often leading to underweight infants,
or in worse cases infant and maternal death [3].
Developing nations such as India bear a greater burden of anemia than their developed counterparts. As per the National Family Health Survey 5
(NFHS-5) conducted in India between 2019 and 2021, anemia affects
roughly 67.1% of children between 6-59 months of age, 59.1% of
adolescent women between 15-19 years of age, and 52.2% of pregnant
women between 15-49 years of age in India [4]. The numbers have
only continued to rise among all segments of the population since
the previous NFHS-4 conducted between 2015 and 2016 (Figure 1).
It was also highlighted that India accounted for approximately 80% of all maternal deaths due to anemia in South Asia, while 58%
of all lactating Indian women were anemic [3]. A low dietary intake
of iron has been regarded as the primary cause behind such a massive
prevalence of anemia among Indians [5]. Iron deficiency among
women and children in India is a major nutritional concern, which is
often caused by insufficient dietary intake of iron to make up for the
iron utilized towards physiological and metabolic needs.
Figure 1: Percentage (%) of the total population affected by anemia in India
as reported in NFHS-5 (2019-2021) and NFHS-4 (2015-2016). The data has
been shown for seven different categories of the population [4,127].
Although the alarming rise in anemia within the Indian population has attracted the interest of the government, non-governmental organizations, and researchers alike, efforts towards
promoting iron supplementation and routine diagnostic testing
have not succeeded in effectively decreasing the percentage of the
affected population. Concurrently, there is a lack of comprehensive
research literature that accurately links the current demographics to their real drivers, which would otherwise enable appropriate responses
to be formulated. Anemia may also result from underlying chronic
inflammatory diseases, genetic mutations, vitamin B12 deficiency,
infections, and autoimmune diseases [6, 7]. However, Indian efforts
towards mitigating the burden of anemia have been largely restricted
to the diagnosis and treatment of IDA, with disregard to the existence
of other forms of anemia. It is worth noting that the Indian population
is ethnically diverse with a rich genetic diversity and varying genetic
predisposition to different clinical conditions. Moreover, different
cultural groups engage in varying lifestyles that include heterogeneity
in sources of nutrition and cooking methods.
Current relevant questions include whether the number of reported cases of anemia is accurate, what percentage and distribution of total anemia cases are in fact the result of iron deficiency, why the occurrence of iron deficiency remains high despite oral iron supplementation programs, and what other forms of institutional and/or household interventions apart from oral supplementation could be adopted to address this nationwide increase in anemia. So far, these questions
have eluded sufficient attention from the scientific and policymaking
communities. This review seeks to highlight the role of Indian dietary
patterns on anemia prevalence and elucidate the shortcomings in
existing interventional strategies, while exploring both conventional
and novel solutions that could potentially mitigate the burden of
anemia in India. Additionally, this review also provides an overview
of the drawbacks in current diagnostic practices to determine anemia
and iron deficiency across the Indian population to promote the
reporting of more accurate and representative demographic data. In
total, 129 references were reviewed for the purpose of this study.
Pathogenesis of IDA:
Iron (Fe) is an essential micronutrient, which forms a key
component in the structure and activity of oxygen-transporting
proteins, Hb and myoglobin (Mb) [8]. Iron is primarily stored in
the form of ferritin, a protein that is synthesized by hepatocytes,
macrophages and enterocytes [9]. The iron balance is determined
by the amount of iron consumed through diet, the physiological
requirements of the individual and the amount of iron stored as
ferritin [10]. ‘Iron deficiency’ refers to a drop in the levels of total-body iron, which translates to a depletion of the iron reserves stored
in macrophages and hepatocytes. This state of depleted iron stores
often precedes the onset of ‘anemia’, which is marked by a decrease
in Hb and RBC levels in blood. Anemia typically occurs due to loss
of RBCs from the body, inefficient RBC production in bone marrow,
and/or hemolysis of RBC [11].

The pathogenesis of IDA can be differentiated into three
characteristic stages [11]. The first stage is characterized by depleting
levels of body iron that may arise due to a multitude of reasons
including inefficient absorption from meals or blood-loss during
menstruation, which promotes the utilization of storage iron (ferritin)
to meet the requirements of erythropoiesis. Following the exhaustion
of storage iron, during the second stage, erythropoiesis occurs in a
state of iron deficiency, wherein production of Hb and myoglobin
becomes limited. This stage is often labelled as ‘iron deficiency
without anemia’, since erythrocytes maintain morphological structure,
and circulating hemoglobin levels remain unaffected. Moreover,
an increase in transferrin/total iron-binding capacity (TIBC) and a decrease in transferrin saturation (TSAT) is noted during the second stage [12]. Transferrin is the iron-transporting protein of blood, the measurement of which represents TIBC, while TSAT refers to the percentage of TIBC that is occupied by serum iron. Prolonged deficiency of iron in the body
inevitably leads to IDA, which is marked by reduction in the levels
of Hb below normal and marks the third stage. The inadequate levels
of Hb lead to the development of hypochromia and microcytosis in
erythrocytes. Therefore, it is crucial to diagnose early onsets of iron
deficiency in segments of the population who are at a higher risk of
developing IDA.
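For clarity, the transferrin indices mentioned above are related by a simple formula, illustrated here with assumed example values rather than data from the cited studies:

TSAT (%) = (serum iron / TIBC) × 100

For instance, a serum iron of 40 μg/dL against a TIBC of 450 μg/dL gives a TSAT of approximately 8.9%, well below the ~20% level commonly taken to indicate iron-deficient erythropoiesis.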
Clinical conditions such as polycystic ovarian disease (PCOD)
and polycystic ovary syndrome (PCOS) subject women to a higher
risk of developing iron deficiency and anemia. While both conditions are related, PCOS is generally considered a much broader and more severe condition, since it affects both endocrine and metabolic functions, with symptoms that include hormonal imbalance, insulin resistance, infertility, menstrual irregularity, hirsutism, obesity, and cardiovascular disease.
Although not fully understood, anovulation is widely regarded as the
starting point, which subsequently leads to the development of cysts and inflammation
in the ovaries, before a spike in androgen secretion is observed [13].
PCOS has been shown to contribute to iron deficiency and anemia among
women in two different ways. Firstly, irregular or excessive menstrual
bleeding may lead to iron deficiency and subsequently anemia.
Secondly, chronic inflammation and oxidative stress related to the
symptom of obesity in PCOS patients upregulates the production of
hepcidin, which in turn restricts the absorption of iron in the small
intestine, ultimately leading to iron deficiency [14].
Thyroid disorders such as hyperthyroidism (overactive thyroid)
and hypothyroidism (underactive thyroid) also predominantly affect
a large percentage of women. Hypothyroidism is often accompanied
by anemia, which is caused by different biomolecular mechanisms.
During hypothyroidism, low levels of thyroid hormones fail to promote
erythropoiesis due to a decrease in erythropoietin gene expression,
which decreases RBC levels [15]. Additionally, hypothyroidism also decelerates the metabolic rate, inducing gastrointestinal changes wherein food passes at a slower pace through the digestive tract. A slow
digestive process coupled with decreased gastric acid production
inhibits the absorption of iron, folate, and vitamin B12 in the
intestine, which may lead to iron deficiency and anemia. It has also
been documented that hypothyroidism may result from pre-existing
iron deficiency and its associated anemia. Thyroid peroxidase is an
important enzyme in the synthesis of thyroid hormone, the activity of
which is largely regulated by iron as a cofactor. Under iron-deficient conditions, the activity of thyroid peroxidase is downregulated,
which suppresses thyroid hormone production [16].
Diagnostic Approaches:
Numerous blood-based molecules and cells serve as biomarkers
in the diagnosis of iron-deficiency and anemia [17]. Based upon
biomarker concentrations in blood, respective stages of iron deficiency
and anemia may be diagnosed. In clinical settings, a complete blood count (CBC) test is typically the first test performed, which determines RBC count, Hb concentration, hematocrit, mean corpuscular volume
(MCV), mean corpuscular hemoglobin (MCH), mean corpuscular
hemoglobin concentration (MCHC), red cell distribution width
(RDW), leukocytes, and platelets from blood samples. The red blood
cell indices from CBC tests serve as a proxy indicator of iron status in
individuals [18]. The periodical NFHS in India primarily measures
the hemoglobin levels of large sample populations to
determine anemia prevalence. However, it was concluded that the
sole focus on Hb levels obtained through CBC tests only managed
to detect anemia, and failed to discriminate between subjects with
normal iron status and those with iron deficiency “without” anemia
[18]. Furthermore, low Hb concentrations may also result from other
forms of anemia such as folate or vitamin B12 deficiency, and hence is
not an IDA-specific biomarker [19]. Other parameters of CBC such as
MCV, MCH, and RDW have practically shown inconsistent sensitivity
and specificity in detecting early onsets of iron deficiency [18, 20].

A serum ferritin (SF) test is often recommended following a CBC
analysis to determine the iron status of an individual with low Hb
levels [21]. The physiological role of ferritin in iron storage makes
it highly sensitive to changes in iron levels in serum. Therefore, its
quantification enables detection of iron deficiency with greater
sensitivity, as compared to CBC parameters, especially in the
diagnosis of early onsets of iron deficiency, preceding anemia. The
extracellular ferritin measured in serum is secreted by macrophages [22]. Although the lower cutoff values of serum
ferritin enable diagnosis of iron deficiency, underlying conditions of
inflammation may result in elevated levels of this biomarker, which
may misguide the interpretation of results in patients [23]. Hence,
complementing serum ferritin tests with a measure of transferrin
saturation (TSAT) is recommended to reduce the possibility of overestimating iron status. The serum iron bound to transferrin
is primarily utilized for the purpose of erythropoiesis, and hence
provides a reliable measure of iron status in patients. However, the
relatively higher cost and lower accessibility of SF tests in India have
led to a wider application of CBC tests in clinical settings, which fails
to account for existing iron deficiency within the population, the
prolongation of which may lead to anemia.
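To make the complementary roles of Hb, SF, and TSAT concrete, the following minimal Python sketch classifies a subject into the stages discussed above. The cutoffs are illustrative defaults for non-pregnant adult women (Hb < 12 g/dL, SF < 15 μg/L, TSAT < 20%), not the population-specific thresholds that, as argued below, India requires:

    def classify_iron_status(hb_g_dl, sf_ug_l, tsat_pct,
                             hb_cutoff=12.0, sf_cutoff=15.0, tsat_cutoff=20.0):
        """Classify iron status from Hb, serum ferritin (SF), and TSAT.

        Cutoffs are illustrative and must be tuned to the population;
        inflammation can elevate SF and mask depleted stores, so the
        result is indicative rather than diagnostic.
        """
        stores_replete = sf_ug_l >= sf_cutoff and tsat_pct >= tsat_cutoff
        if stores_replete:
            # Normal stores: any anemia present is likely not iron-driven
            return "iron replete" if hb_g_dl >= hb_cutoff else "anemia, likely non-IDA"
        if hb_g_dl >= hb_cutoff:
            # Depleted stores with preserved Hb: stages one and two,
            # i.e., iron deficiency without anemia
            return "iron deficiency without anemia"
        return "iron deficiency anemia (IDA)"

    # Example: depleted stores, low saturation, low Hb -> IDA
    print(classify_iron_status(hb_g_dl=10.8, sf_ug_l=8.0, tsat_pct=11.0))

Such rule-based screening can only discriminate iron deficiency from other anemias when SF and TSAT are measured alongside the CBC, which is precisely the accessibility gap noted above.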
Additionally, the point-of-care testing (POCT) device “HemoCue Hb 201”, used during NFHS-4 and NFHS-5 to measure Hb
levels from capillary samples, has been shown to generate data that
are less accurate and precise, as compared to hematology analyzers
[4, 24, 127]. Furthermore, a study highlighted that the WHO
recommended Hb cutoff values for anemia led to an overestimation
of the percentage of anemia-affected population during NFHS-5 [25].
This deviation was attributed to the WHO cutoff values, which were based on studies of North American and European populations, predominantly of white ethnic backgrounds, not being representative of the Indian population. Therefore,
a reevaluation of the Hb cutoff values to suit the age and gender
variations specific to India was proposed, which accounts for ethnic
variations in physiological demands, geographical differences,
and the prevalence of other pathological conditions. Similarly, a
reexamination of SF cutoff values to account for the widespread
occurrence of infections and inflammatory conditions in India,
along with considerations for geographical and ethnic differences
would allow accurate measurement of iron status in India. In India,
where CBC tests are more accessible and affordable than SF tests, a
combination of multiple CBC parameter cutoffs is recommended to
accurately discriminate between the diagnosis of iron deficiency and
IDA.
Genetic Predisposition to Iron Deficiency Anemia:
Some rare forms of anemia arise due to genetic mutations in
particular genes or as genetically inherited disorders. Iron Refractory
Iron Deficiency Anemia (IRIDA) is an inherited autosomal recessive
disorder that inhibits the absorption of iron from diets [26]. Oral
supplementation of iron during conditions of IRIDA shows marginal
improvement in the iron status of an individual. It is typically caused
by a mutation in the Transmembrane Serine Protease-6 (TMPRSS6)
gene, which encodes the proteolytic enzyme “Matriptase-2” [27]. This
enzyme plays a key role in the negative regulation of hepcidin, which
in turn regulates the homeostasis of iron in the body [28]. In iron metabolism, hepcidin has been described as the peptide responsible for the degradation of ferroportin during conditions of iron overload [29].

The knockout of the TMPRSS6 gene (TMPRSS6-/-) in mice elevated hepcidin production, inhibiting the absorption of iron into the intestine and blood plasma, which resulted in a substantial reduction in plasma iron levels and transferrin saturation [28].
Additionally, TMPRSS6-/- mice displayed hallmarks of anemia such
as hypochromia, anisocytosis, and poikilocytosis, validated by a
depletion of red blood cell indices.

Genome-Wide Association Studies (GWAS) have been extensively
performed to identify single nucleotide polymorphisms (SNPs) of
TMPRSS6 in human populations. So far, approximately 50 SNPs have
been identified, among which the occurrence of rs855791, rs4820268,
and rs11704654 has been studied most widely and linked to poor iron
status [30, 31]. A study on SNPs of TMPRSS6 and respective Hb levels
in European and Indian Asian ethnic populations revealed that the ‘A’
allele of rs855791, which is associated with low Hb levels, was more frequently
observed within the Indian Asian population, as compared to
Europeans [32]. Moreover, another study highlighted the occurrence
of ‘A’ allele of rs855791 with an additional reduction of 0.07 g/dL and
2.24 μg/L in the Hb levels and ferritin concentrations, respectively,
in the Asian population, as compared to the Caucasian population [33]. The
high frequency of this allele increases the susceptibility of Asian ethnic
populations to IDA. More recently, an evaluation of an iron-deficient
pediatric cohort in the Indian subcontinent reported that 38% (23
patients) of the total iron refractory patients (60 patients) showed
IRIDA phenotypes, among which 12 out of 23 cases demonstrated
intronic and exonic variations [34]. However, to elucidate the genetic
predisposition of the Indian population to IRIDA, further GWAS must
be conducted, which accounts for the variations in SNPs and their
occurrence in different ethnic groups of the Indian subcontinent. So
far, six frequently occurring SNPs have been identified in the Indian
population (Table 1). However, risk alleles of identified SNPs among
the Indian population have not been extensively studied yet.
Table 1: Single nucleotide polymorphisms (SNPs) of the TMPRSS6 gene
observed within Indian subjects [128]
Existing therapeutic strategies for IRIDA have focused on
countering the oral iron refractoriness through intravenous (IV)
administration of iron into the bloodstream [35]. While IV iron administration typically produces a progressive improvement in Hb, which may or may not reach normal levels, it results in only partial correction of IRIDA. This long-term therapy fails to address the microcytosis and low transferrin saturation that are characteristic of IRIDA. In order to develop a
holistic approach towards the treatment of IRIDA, recent research
efforts have been directed towards identifying biomolecular targets
involved in the pathogenesis of this condition. A group explored the
possible therapeutic implications of silencing hepcidin by employing
antibodies that inhibit the hemojuvelin (HJV) protein co-receptor involved in the production of hepcidin [36]. This study
observed a significant improvement in Hb levels of TMPRSS6-/-
mice upon IV injection of anti-HJV antibody (h5F9-AM8), with
a peak in Hb levels being attained at 2 weeks from administration,
followed by a gradual decline through week 8. Moreover, no
histopathological abnormalities were observed in the spleen and liver
upon IV administration of antibodies. Although rare in occurrence,
refractoriness to oral iron supplementation poses a major challenge
in treating IRIDA. Therefore, further research towards elucidating
novel biomolecular participants in the pathogenesis of this disorder
is crucial.
Current Intervention Strategy:
Primary interventional strategies against anemia in India have
been limited to oral supplementation of iron among at-risk segments
of the population [37]. Oral iron and folic acid (IFA) supplements in the form of tablets are an age-old interventional strategy, first implemented in 1970 through the launch of the ‘National Nutritional Anemia Prophylaxis Program’ (NNAPP) by the Government of India
with the objective of combating anemia at a community level. In
recent years, the NNAPP has been integrated with newer programs such as
‘Anemia Mukt Bharat’ (AMB), ‘National Iron Plus Initiative’ (NIPI),
and the National Health Mission’s (NHM) ‘Weekly Iron Folic Acid
Supplementation’ to improve overall community outreach and extend
its network nationally. NIPI recommends dosages ranging from 20 mg to 100 mg of elemental iron in combination with folic acid. The
efficacy of iron supplementation has been well established; clinical trials have reported increases in the Hb levels of patients who took iron supplements for a period of three months [38].

Yet, the occurrence of anemia has only increased within all
segments of the population since NFHS-4, which was conducted
between 2015 and 2016 (Figure 1). Lack of compliance within the population hinders the efficacy of oral supplementation programs. Factors contributing to poor compliance include undesired side-effects, gastrointestinal discomfort, unclear dosage instructions, poor counselling from prescribers, pseudo-scientific beliefs, forgetfulness, and a lack of general awareness among
consumers about the implications of anemia towards maternal and
child health [3]. Furthermore, misguided public perception of oral
supplements as a form of medicine discourages continuation of their
consumption after clinical improvements are noted [39]. Therefore,
there is a need to incorporate additional measures, in conjunction with oral supplementation, that are suitable and acceptable to the large and diverse population of India.
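As a point of reference for the NIPI dosages above, supplement and fortificant labels distinguish the mass of an iron salt from the elemental iron it delivers. The conversion below is a standard worked example, not a figure quoted from the cited programs:

elemental iron (mg) = salt mass (mg) × iron fraction of the salt

Ferrous sulfate heptahydrate (FeSO4·7H2O) is roughly 20% elemental iron by mass, so delivering 100 mg of elemental iron requires approximately 100 / 0.20 = 500 mg of the salt, whereas ferrous fumarate (roughly 33% iron) would require only about 300 mg.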
Dietary Sources of Iron:
Through diet, iron is primarily acquired in two different forms: heme iron and non-heme iron [40]. The biomolecular mechanisms by which these two forms of iron are absorbed into the duodenal cells vary. ‘Heme’ is a prosthetic group, which provides
functionality to Hb molecules. Consisting of a ferrous (Fe2+) cation
surrounded by a porphyrin ring, heme iron is typically found in animal-based foods such as red meat, poultry meat, liver and fish. Heme iron
is absorbed into the enterocytes by transporter protein “Heme Carrier
Protein 1” (HCP1) in the duodenal region of the small intestine [41].
Inside the enterocytes, heme iron is oxidized by the heme oxygenase
enzyme to release the Fe2+ cation from the heme molecule [41].

On the other hand, in the intestinal lumen non-heme iron typically
exists in the ferric form (Fe3+), which is unable to cross the intestinal
membrane due to its insolubility in the intestinal pH. Absorption
of non-heme iron into enterocytes is preceded by the reduction
of ferric cations (Fe3+) into its ferrous form (Fe2+) by ferrireductase
enzyme ‘duodenal cytochrome B’ (DcytB) [42]. Subsequently,
Fe2+ ions are transported into the enterocytes by the activity of the
divalent metal transporter 1 (DMT1), a transporter protein that lines
the brush-like border of the inner wall of the small intestine. Non-heme iron is abundant in plant sources of food such as pulses, nuts,
legumes and dark green leafy vegetables. Unlike heme iron, which
is readily absorbed into the intestinal membrane, non-heme iron
is comparatively less bioavailable; a characteristic attributed to the
precipitation of ferric (Fe3+) iron from aqueous solutions in the alkaline
conditions of the intestinal lumen [42]. Moreover, the additional step
involving the reduction of Fe3+ to Fe2+ to allow transport across the
intestinal membrane makes the absorption process slower.
The dietary trends of the Indian subcontinent are diverse owing
to variations in cultural practices, social identity and religion [43].
This heterogeneity, which includes variety in the type and quantity of
ingredients used, as well as different cooking methods across subsets
of the Indian population poses a significant challenge in summarizing
the Indian diet into an average form [44]. These characteristic differences in the mode of nutrition play a significant role in defining the iron status, in terms of bioavailability, within different subsets of the
Indian population. However, if food sources are to be considered, the
traditional Indian diets across most regions are primarily vegetarian,
with the inclusion of vegetables, grains and fruits, while animal-based food products such as dairy, meat, fish and eggs are variably
consumed in a limited manner [44]. Apart from social constructs,
the unaffordability of meat due to its high cost and lower incomes
among consumers also contributes significantly towards its marginal
consumption [5,44]. In the Indian diet, non-heme iron constitutes
approximately 95% of total iron consumed [44, 45]. This indicates that even among meat consumers, a large proportion of the total iron is acquired from plant sources of non-heme iron, which contributes to lower rates of absorption.
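A rough first-order estimate illustrates why a 95% non-heme share matters for the iron actually absorbed. In the Python sketch below, the absorption fractions (~25% for heme iron, ~8% for non-heme iron) are commonly cited approximations assumed purely for illustration, not values drawn from the cited surveys:

    def absorbable_iron_mg(total_iron_mg, nonheme_fraction=0.95,
                           heme_absorption=0.25, nonheme_absorption=0.08):
        """Estimate absorbed iron (mg) from total dietary iron intake,
        given the share of non-heme iron and assumed absorption rates."""
        heme = total_iron_mg * (1 - nonheme_fraction) * heme_absorption
        nonheme = total_iron_mg * nonheme_fraction * nonheme_absorption
        return heme + nonheme

    # 15 mg/day of dietary iron, 95% non-heme (typical Indian diet): ~1.3 mg
    print(absorbable_iron_mg(15.0))
    # The same intake with a 50% heme share, for comparison: ~2.5 mg
    print(absorbable_iron_mg(15.0, nonheme_fraction=0.50))

Under these assumptions, the same total intake yields nearly twice the absorbed iron when half of it comes from heme sources, which is why the composition of iron intake matters as much as its quantity.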
Dietary Factors Affecting Iron Bioavailability:
The rate of non-heme iron absorption is largely affected by the
components of the food matrix. Such components may include macro
and micro-nutrients, fiber, polyphenols, and antinutrient factors
such as phytates. Depending upon whether iron absorption is up-regulated or down-regulated, these components may be classified as
enhancers or inhibitors, respectively. Some examples of enhancers
include ascorbic acid (vitamin C), citrates, alcohol, and organic acids,
while phytates, fiber, polyphenols, and vegetable proteins may act as
inhibitors of iron absorption. The total iron absorbed by an individual
is greatly influenced by the final bioavailable content of iron, which is
determined by the net effect of all enhancers and/or inhibitors on the
total iron content in their diets.

The impact of alcohol consumption on iron absorption is
multifaceted; it has been shown to both enhance and inhibit iron
absorption under varying conditions. Alcohol consumption has
been linked to diminished production of hepcidin in the liver, which
ultimately elevates serum iron levels [46]. The inhibitory effect on
hepcidin production has been attributed to alcohol-induced oxidative
stress and the formation of reactive oxygen species (ROS), which
subdues the transcription of hepcidin. However, excessive alcohol
consumption has also been found to increase risks of iron deficiency
and anemia. Prolonged alcohol consumption can cause internal bleeding within the gastrointestinal system, leading to a loss of
erythrocytes [47]. Additionally, alcohol negatively affects the process
of hematopoiesis in bone marrow, thus inducing anemia.
Depending upon its source, protein affects the bioavailability of
iron differently. Proteins derived from plant sources, dairy sources,
and eggs play an inhibitory role towards iron absorption, while animal
sources of protein enhance iron absorption. Phosvitin is a protein,
primarily found in egg-yolks, which inhibits iron absorption [48]. It
is a highly phosphorylated protein with high-affinity towards iron,
which causes formation of insoluble complexes, thereby limiting iron
bioavailability. Proteins from cow’s milk, such as casein, demonstrate
an inhibitory role in the absorption of iron by virtue of phosphoserine
groups present within its structure [49]. Casein forms strong bonds
with iron, which prevents iron absorption in the duodenal region of
the intestine. Additionally, calcium is abundantly present in milk,
which competes with iron for transportation across the intestinal
membrane via DMT1. On the other hand, animal proteins native
to muscle tissues have been found to significantly enhance iron
absorption [50]. Cysteine and histidine amino acid residues within
the structural configuration of muscle proteins bind iron to form
soluble complexes, which aids absorption.
Ascorbic acid in diets: Ascorbic acid (vitamin C) is a water-soluble antioxidant, which functions as an enhancer of iron absorption in the small intestine. As a reducing agent, ascorbic acid facilitates the
reduction of Fe3+ ions into Fe2+ form, which is readily absorbed
into the intestinal cells [51]. Additionally, the chelating property of
ascorbic acid promotes the formation of soluble complexes with iron,
which subsequently improves the solubility and bioavailability of iron
in the intestinal pH [52]. In a study investigating the influence of
fruit juices on iron absorption from rice meals, it was observed that
the ascorbic acid content of juices was positively associated with the
extent of iron absorption [53].
In contrast to single-meal studies, a diminished effect of ascorbic
acid on iron absorption was noted in complete diet studies, which was
attributed to the broad biochemical composition that is representative
of total diets [54]. A possible dampening effect of the residual gastric
contents from previous meals in a total diet on the activity of ascorbic
acid was proposed. Moreover, it was revealed that ascorbic acid only
conferred a prominent increase in iron absorption when consumed
with meals, which naturally contained a high content of iron absorption
inhibitors such as phytates and polyphenols [55-57]. Additionally, a
study of serum ferritin levels indicated that optimal absorption only
occurred when ascorbic acid was consumed during meals, while
consumption away from meal time or in between meals did not
improve iron status [58].
It is imperative that the efficacy of ascorbic acid towards improving
the iron status of the Indian population be studied, while accounting
for the population’s diverse dietary trends and consumption patterns.
Studies of complete diets translate more closely to real-life dietary habits, and such studies have so far shown only marginal improvements in iron status when meals are consumed in conjunction with ascorbic acid. Further examination of Indian whole diets and their effects on the efficacy of
ascorbic acid towards enhancing iron absorption is called for. In a
largely plant-based Indian diet, the consumption of vitamin C is
encouraged, since it has been shown to mitigate the iron absorption-inhibiting effects of phytates and polyphenols present in plants.
Tea consumption in India: Tea (Camellia sinensis) is a popular beverage consumed by cultures across the globe. Although drinking tea has been associated with numerous health benefits such as improved cardiovascular health, antioxidant activity, and anti-inflammatory properties, tea consumption has been shown to lower
the bioavailability of iron in meals. Phenolic compounds such as
tannins, which are present in substantial amounts in tea, inhibit the
absorption of iron in the intestinal lumen by chelating iron to form
insoluble complexes [59]. Thus, in the presence of polyphenols from
tea, the amount of free iron available for absorption in the intestinal
lumen decreases significantly.
India is the largest consumer of tea and the second-largest producer of tea globally. Although the per capita consumption of tea in India is lower than global standards, nearly
88% of Indian households spanning across all socio-economic classes
consume this beverage. A survey conducted by the Tea Board of
India, which aimed to assess the trends in domestic consumption of
tea reported that roughly 80% of tea drinkers consume tea before or
during their breakfast, which is often the first and most nutritious meal
of the day [60]. Although there is a lack of comprehensive literature
which focuses on the correlation of tea drinking with iron deficiency,
especially among Indian tea drinkers, the extent to which the
absorption of non-heme iron could be reduced by the consumption
of tea alongside iron-fortified meals has been demonstrated to be
around 79-94% in a previous study on human subjects [61]. In a
separate study, it was observed that iron absorption was lowered by
approximately 75-80%, when tea was consumed within an hour from
a meal [55].
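To put these reductions in perspective, consider a worked example that uses the cited inhibition range, with the baseline figure assumed purely for illustration: if a meal would otherwise supply 1.0 mg of absorbable non-heme iron, drinking tea with that meal at an 80% inhibition level leaves only 1.0 × (1 − 0.80) = 0.2 mg absorbed. Repeated across most meals of the day, as the Tea Board survey’s breakfast figures suggest is common, such losses could claim a substantial fraction of an individual’s absorbable dietary iron.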
The role of tea as a potential cause for widespread iron deficiency
and IDA within the Indian population warrants further investigation
and appropriate public health interventions. Currently, a literature
search focusing on the impacts of tea consumption on the Indian iron
status reveals little information. A 50-70% drop in iron absorption was
observed in Indian women between 18-35 years of age, irrespective
of their pre-existing IDA or iron replete (control) status [62].
However, this study merely attempted to highlight the implications
of tea consumption on two groups: women with IDA and an iron
replete control group. These groups were selected based on specific
criteria and subjected to a number of restrictions, which included
consumption of only test meals and discontinuation of vitamin-mineral supplements, which may not be representative of real-life scenarios.
Another cross-sectional study on schoolchildren from Kerala, India,
reported anemia among approximately 34.2% of tea/coffee drinkers,
and 26.9% of non-drinkers [63]. There was however no comparison
of the difference in severity of anemia between the tea/coffee drinkers
and the non-drinkers, as well as no information regarding the ratio of
tea drinkers to coffee drinkers among the anemic cases. Additionally,
an overview of other enhancers/inhibitors of iron absorption within
the regular diets of the anemic tea/coffee drinkers was absent, which would have helped determine whether anemia prevalence in schoolchildren was in fact related to tea consumption.
Hence, the results of such studies may not be wholly relevant in
a wider population-based model such as India, especially in terms of
the heterogeneity and complexity associated with age, gender, diet,
genetics and iron status. To accurately conclude if tea consumption in
India is indeed a major driver of iron deficiency and IDA, several long-term randomized controlled trials need to be conducted [64]. Although
physiological adaptation of the iron metabolism by up-regulating or
down-regulating iron absorption in response to an individual’s iron
status has been well documented, it is still not entirely clear as to what
extent the body’s adaptive responses to normal or depleted iron stores
affect intestinal iron absorption in the presence of tea polyphenols
[65, 66].
Dietary Diversification:
One household-level measure for improving nutritional status is the active consumption of a wide variety of foods that
are rich sources of various nutrients. This practice is referred to as
dietary diversification, which ensures a balanced intake of all essential
nutrients, and has been recognized as a diet of “good quality” [67].
Dietary diversification has been positively associated with improved
micronutrient status in populations [68]. The extent to which the iron
status of a population may be improved through dietary diversification
is determined by the net iron available for absorption in diets. Hence,
dietary diversification should take into account the sum of total iron
content consumed from a variety of foods, the type of iron consumed
based on its source, as well as the presence of enhancers and inhibitors
of iron absorption in diets.

Numerous studies have defined dietary diversity as a proxy
indicator of micronutrient status. One such study determined that
Ethiopian children who consumed fewer than four food groups per
day were more likely to develop anemia than children who consumed
diets of greater variety [69]. Additionally, South Ethiopian pregnant
women with low dietary diversity were found more likely to be
anemic than those with highly diverse diets [70]. Hence, a diverse diet
is recommended as a long-term approach to improve the iron status
of populations. The added advantage of dietary modification to ensure
diversity is that it achieves adequate intake of multiple micronutrients
and thereby eliminates multiple nutritional deficiencies in the process.
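Dietary diversity is typically operationalized as a simple count of food groups consumed over a recall period. The minimal Python sketch below mirrors that approach; the seven-group list and the four-group risk threshold are illustrative choices (the threshold echoes the Ethiopian study cited above), not a standardized survey instrument:

    # Illustrative food-group list; survey instruments such as the FAO
    # dietary diversity scores use similar groupings.
    FOOD_GROUPS = {"grains/roots", "legumes/nuts", "dairy", "flesh foods",
                   "eggs", "vitamin A-rich fruits/vegetables",
                   "other fruits/vegetables"}

    def dietary_diversity_score(consumed):
        """Count distinct recognized food groups eaten in the recall period."""
        return len(set(consumed) & FOOD_GROUPS)

    day = ["grains/roots", "legumes/nuts", "other fruits/vegetables"]
    score = dietary_diversity_score(day)
    print(score, "-> at-risk" if score < 4 else "-> adequate diversity")

Because the score only counts groups, it remains a proxy: it captures dietary monotony well but says nothing about the quantity of iron consumed or its bioavailability, which is why the considerations noted earlier still apply.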
However, the predominantly plant-based diet of the Indian
population lacks diversity, wherein households rely on a few sources
of staple grains and vegetables with limited consumption of animal
products [71]. Such “dietary monotony”, especially a reliance on mostly non-heme iron sources, inevitably leads a population to absorb insufficient iron, which may subsequently lead to iron deficiency and
anemia. Dietary diversity in India is largely influenced by a plethora
of socio-economic factors. A study focusing on both rural and urban
households in the Indian state of Uttar Pradesh established a positive
relationship between household income levels and dietary diversity
[72]. This observed relationship was consistent with results from a
similar study on pregnant women from Kenya [73]. The correlation
was explained by increased accessibility and affordability of a wider
range of food sources with improving financial conditions. Moreover,
a direct positive association was noted between the amount and size
of land owned by rural Indian households, and dietary diversity, since
land in a rural setting constitutes a financial asset from which, revenue
may be generated [72].
Another socio-economic determinant of nutritional adequacy
and dietary diversity within households is the level of education, wherein
maternal education level plays a key role towards the extent of
nourishment in children under five years of age. A higher level of
education typically correlates with a greater dietary diversity, and vice
versa. Education typically determines the level of dietary diversity in
two different ways: through its direct influence on employability and
income levels of an individual or a household, and/or through health
literacy [74]. The prevalence of iron deficiency and its related anemia
in a large segment of India’s rural population can be attributed to an
absence of diverse diets, which in turn stems from issues relating to
levels of income, education, and accessibility to iron-rich food groups.
Therefore, the need for interventions to improve India’s iron status
goes beyond simple dietary diversification.
Impacts of Food Processing on Iron:
Most diets across the globe involve pre-processing and processing/
cooking of food before consumption in order to extend shelf-life,
enhance digestibility, eradicate pathogenic microorganisms and
toxic chemicals for food safety, increase bioavailability of nutrients,
and improve ‘palatability’. Based on the nature of transformation
of food, processing techniques may be categorized as physical (grinding, peeling, emulsification, spray drying, etc.), chemical (refining, gelation, stabilization, etc.), thermal (sterilization, pasteurization, etc.), and biochemical (fermentation, etc.). These changes subsequently alter the structural and functional characteristics of food, which affects the bioaccessibility
and bioavailability of its components. Likewise, the amount of
bioaccessible and bioavailable iron present in food may increase or
decrease based upon the cooking or processing technique employed.

The diverse set of traditional cooking methodologies characteristic
to the Indian household, coupled with novel processing/cooking
techniques such as microwave and air-frying, ultimately determines
the percentage of iron content in raw food that remains available for
absorption in the small intestine. Conventional cooking methods in
India entail frying, boiling, roasting, pressure cooking, and steaming
[75]. Some pre-processing and processing techniques include
soaking, sprouting, fermentation, dehulling, milling, and blanching.
Each method affects the concentration and bioavailability of iron
differently (Figure 2).
Figure 2: Percentage (%) increase and decrease observed in total iron
concentration and iron bioavailability of food upon subjecting to six different
food pre-processing and processing techniques
Soaking & Germination: Soaking is the practice of immersing foods such as legumes, nuts, and grains in water with the aim of softening, cleaning, and improving their palatability. Sprouting is a
similar process in which the legumes or grains are soaked, drained,
and rested until the seeds germinate. Soaking and sprouting have been
shown to improve the bioaccessibility of iron in legumes and grains
[76]. Dephytinization is a process in which the phytate content of
plant-based foods are lowered or removed to improve bioaccessibility
and bioavailability of minerals. The presence of water activates an
endogenous phosphatase enzyme in plants known as phytase, which
in turn hydrolyzes phytate by removing its phosphate groups [77].
Thus, soaking and sprouting brings about a reduction in the level of
phytates present in plant cells, thereby improving the bioaccessibility
and bioavailability of iron in plant-based diets.
A study noted a 28.2% and 39.8% decrease in iron content upon
soaking of green and white faba beans, respectively [76]. However,
the phenomenon was attributed to the leaching out of iron into
the soaking medium, which could be retained for further use to
minimize iron loss. Additionally, the same study reported an increase
in the bioavailability of iron in both types of faba beans upon soaking
and sprouting. Moreover, a reduction in the phytate content of both
green and white faba beans was noted. The mechanism behind such
reduction of phytate content in this study has been explained by a
combination of the activity of endogenous phytase enzyme, as well as
water solubilization of phytic acid salts [76].
Dehulling: Dehulling refers to the process of removing the
“hull”, the outermost covering of seeds, legumes and grains in order
to improve texture and digestibility of food. The removal of hull
around the kernel may be achieved by methods of grinding, rolling
and pounding. The hull houses a variety of antinutrient factors such
as polyphenols, phytates, and fibers, which limit the bioavailability
of iron. Moreover, the hard outer covering of hull inhibits the
bioaccessibility of iron stored within the kernel. Therefore, dehulling
confers a significant improvement in the bioaccessibility and
bioavailability of iron in plant-based foods. A previous study
focusing on dehulling of four different types of legumes, namely
cowpea, green gram, lentil, and chickpea demonstrated a substantial
decrease in phytic acid and tannin content, by 47-52% and 43-52%, respectively [78]. These results implied that a considerable amount of antinutrients was present in the outer covering. Although the iron
content of legumes decreased upon dehulling, which was attributed to
a large proportion of the mineral being stored in the hull, dehulling
improved the bioavailability of remaining iron.
Milling: Milling refers to the process of grinding, crushing, or pulverizing grains, seeds, and nuts to convert them into powdered forms
such as flour and ground spices. This technique also serves to separate
components of seed and grains for the purpose of refining. This process
is often carried out following dehulling, in order to further refine the
dehulled products. In refined grains, the milling process removes the
bran and the germ from the seed to leave behind the endosperm,
whereas, whole grains consist of all three components intact. While
whole grains are a great source of fiber, proteins, vitamins, and even
minerals, which exist in high concentrations within the bran, these
grains also house substantial amounts of polyphenols and phytates
within the bran, the presence of which affects the bioavailability of
iron.
Several studies have reported statistically significant reduction
in the phytic acid content of cereal grains upon the removal of their
bran through the process of milling [79]. However, milling also contributed to the loss of essential micronutrients such as iron, which may be present in high concentrations within the bran. Milling of rice resulted in a reduced iron content, as compared to brown rice with the bran intact [80]. Furthermore, the iron content of rice cultivars was negatively associated with milling duration. The net change in total iron absorbed from milled food
products, when compared with unmilled wholegrain food products,
is determined by the total decrease in iron content and total increase
in iron bioavailability following milling. Therefore, as a measure to
ensure a net positive change in total iron absorbed, food products
often undergo post-milling iron fortification.
Fermentation: For thousands of years, fermentation has been
utilized and developed as a popular processing technique to produce
a wide variety of food and beverages. Some popular applications of
fermentation include bread making, cheese making, wine production,
beer brewing, and yoghurt fermentation. By definition, fermentation refers to the anaerobic metabolic pathway in microorganisms by which carbohydrates (substrates) are converted into organic acids, alcohol and carbon dioxide. Depending upon the type of
substrate and microorganism utilized, the products vary significantly.
Fermentation has been shown to improve the bioaccessibility of iron
in cereals and legumes in multiple ways. Firstly, microbial phytases, as well as endogenous plant phytases, have been shown to hydrolyze phytates during fermentation [81]. Secondly, the action of phytase and
α-amylase enzymes disrupts the complex phytate and starch matrix,
inside which iron remains embedded [82]. In addition, organic acids
synthesized as a by-product during fermentation lowers the pH to an
optimal level for iron to solubilize. The low pH also provides favorable
conditions for enzymatic degradation of phytate [81, 82].
Extensive studies have demonstrated the negative correlation
between fermentation and phytate content in a number of cereals
and legumes. One of the studies reported a 60% reduction in the
phytic acid content of four different Sudanese sorghum cultivars
upon fermentation for 12 hours [83]. The effects of fermentation
on the bioaccessibility of iron was assessed in three popular Indian
breakfast foods, Idli (rice : black-gram = 2 : 1), Dosa (rice : black-gram = 3 : 1), and Dhokhla (chickpea : green-gram : black-gram : rice = 2 : 2 : 1 : 1) [84]. The fermented batters consisting of rice and black-gram combinations in idli and dosa showed an increase in iron bioaccessibility of 276% and 127%, respectively, while no
improvement was observed in dhokhla batter. Additionally, tannin
content of all three types of fermented batter was completely removed,
while phytate content was reduced differentially. However, dhokhla
batter recorded the lowest reduction in its phytate content among
all batters, where a considerable amount of phytate remained after
fermentation. This observation was attributed to the addition of
chickpea and green gram to the rice and black gram combination,
which added to the phytic acid content of the batter.
Blanching: Blanching is a pre-processing technique, wherein
food is rapidly heated to a preset temperature, usually through brief
immersion in boiling water or steam, before subsequently cooling it
down rapidly to halt the cooking process. This technique is primarily
employed towards preserving food through enzyme inactivation at
high temperatures, retention of organoleptic properties, improvement
of food texture, and prevention of microbial contamination.
Types of food that are commonly blanched include vegetables,
legumes, nuts, fruits, seafood, meat, and poultry. The physical and
metabolic modifications in food cells due to blanching makes the
membranes more permeable, which inevitably causes loss of water-soluble vitamins and minerals through leaching [85]. The extent to
which micronutrients may be lost is largely determined by the set
temperature and time in blanching, the maturity and variety of food,
the blanching medium, cooling medium, and surface area to volume
ratio of cut pieces of food.
Blanching of 12 species of Nigerian vegetables for 5 minutes
in boiling water resulted in a 14.1% to 45.4% reduction in iron
concentration [86]. However, blanching is also responsible for a
reduction in the levels of antinutrient factors such as tannins and
phytate, which significantly improves the bioavailability of iron
[87]. A study demonstrated 53.7% to 73.8% and 46.23% to 88.47%
reduction of phytic acid and tannic acid concentrations, respectively,
in the leaves of cabbage, collard, turnip, sweet potato, and peanut,
upon blanching for 10 minutes at 98±1°C [88]. In order to retain
much of the iron content of raw food, blanching for a short duration
is recommended. Blanching time has been negatively correlated with
the mineral content of food [89]. Novel blanching techniques such as
‘dry blanching’, which eliminates the use of liquid blanching media, have gained popularity in the food industry due to the prevention of
iron leaching. Other novel blanching methods include microwave
blanching, infrared blanching, and high pressure blanching. The use
of microwave blanching was shown to retain a greater percentage of
phytochemicals in potatoes, as compared to conventional blanching
with water [90].
Boiling: Boiling is a common method of cooking across the world, where food items such as vegetables, grains, legumes, and meat are submerged in a boiling solvent (primarily water) until they reach a desired level of consistency (texture, thickness, hardness, etc.).
Much like blanching, boiling is responsible for the loss of a significant
amount of micronutrients, including iron [91]. In fact, boiling
contributed to a greater loss in micronutrient content as compared
to blanching, since boiling is performed for a greater duration of time
to allow complete and thorough cooking of food. The same group
compared the effects of three cooking methods, namely microwaving,
blanching, and boiling on the iron content of spinach, brussel sprouts,
and broccoli [91]. Boiling resulted in the highest percentage of iron
losses among all three cooking methods at 52, 28, and 60 percent for
spinach, brussel sprouts, and broccoli, respectively. However, since
iron loss occurs primarily through leaching into the boiling medium,
the retention of the boiling medium (stock) for further cooking or
serving allows the lost iron to be recovered [92].
Cooking with Iron Utensils: The use of iron utensils in various cooking practices has been shown to increase the iron content of food and improve Hb levels [93]. Iron from utensils leaches out into the
cooking medium to increase its total iron content. Therefore, it has
been recognized as a cost-effective method, suitable for tackling iron
deficiency and IDA in low-to-middle income countries. The extent
to which iron content of food may increase is determined by the pH
of food, the moisture/water content of food, as well as the cooking
duration [94, 95]. The use of lemon in distilled water, which produced an acidic pH (pH = 3.2), demonstrated the highest leaching of iron when cooked in iron utensils, as compared to meals prepared at a higher pH [94].
Additionally, the rate of leaching of iron into food is also dependent
upon the age of the iron cooking ware [96]. The efficiency of iron
utensils in enriching food with iron tends to decrease with usage, and
frequent use accelerates the ageing process. Although iron utensils have been a popular choice of cookware in Indian households,
adaptable strategies to optimize the extraction of iron from these
utensils are still not widely known. Awareness about the benefits of
utilizing iron utensils, along with methods of cooking to maximize
iron extraction would benefit efforts to combat iron deficiency and IDA in India. Moreover, a reason for the lack of compliance among Indian households towards iron pots and cooking utensils stems from the tendency of iron to oxidize (rust). However, rusting
could be prevented through seasoning of iron utensils with a layer of
cooking oil. The extent to which leaching of iron may be affected upon
seasoning of iron utensils is yet to be determined. Hence, it is crucial
to conduct further investigations that assess the impact of seasoning
on the leaching properties of iron utensils.
Food Fortification:
Towards reducing the prevalence of iron deficiency and anemia
in India, several governmental and non-governmental initiatives have
adopted an interventional strategy of fortifying food with iron. Fortification of staple foods such as wheat flour, rice, and salt has been widely conducted as a cost-effective measure due to their
widespread consumption. The government of India has promoted
iron fortified rice through social safety net programs such as
Integrated Child Development Services (ICDS), PM-POSHAN, and
targeted Public Distribution System (PDS). These initiatives sought
widespread distribution of rice fortified with ferric pyrophosphate
(FPP) and sodium ferric ethylenediaminetetraacetate (NaFeEDTA)
[97]. FPP is an insoluble form of iron, which is micronized to increase
its total surface area to maximize absorption, whereas NaFeEDTA is
more water-soluble, which maintains bioavailability in the presence
of inhibitors in the diet. In a meta-analysis of 15 studies, a substantial
increase in Hb concentration was recorded upon consumption of
iron fortified rice, as compared to control groups who consumed
unfortified rice [98]. Therefore, iron fortification of rice has been
recommended as a population-level intervention to improve the
mean hemoglobin levels of countries such as India, where rice is
staple, and the population is burdened with a massive prevalence of
iron deficiency and anemia.

Wheat and its flour are also staples of the Indian diet, with
widespread consumption across parts of northern and western India.
In 2018, the Food Safety and Standards Authority of India (FSSAI)
mandated set levels, as well as suitable forms of iron fortificants for
the fortification of both whole wheat and refined wheat flour in India
[99, 100]. Although not mandatory, wheat fortification has gained
momentum in India due to the presence and efforts of organizations
such as Food Fortification Initiative (FFI), Global Alliance for
Improved Nutrition (GAIN), and their partner institutions.
NaFeEDTA has been an effective fortificant of wheat flour due
to its ability to inhibit iron-phytate interactions [101]. The use of
NaFeEDTA-fortified whole wheat flour as an intervention reported
a 67% and 51% reduction in the cases of iron deficiency and IDA,
respectively, in iron-deplete primary schoolchildren of both urban
and rural India [101].
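For a sense of scale, a back-of-envelope calculation is useful; the fortification level here is assumed for illustration rather than quoted from the FSSAI standard: flour fortified at 30 mg of iron per kg supplies 300 g × 0.030 mg/g = 9 mg of additional iron to an adult consuming 300 g of flour per day, a meaningful contribution to daily requirements even before bioavailability and cooking losses are taken into account.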
However, the efficacy of iron fortification may be restricted
due to factors including, but not limited to, the type of staple food
employed as vehicles for iron delivery, the form of iron fortificant
utilized, pre-existing conditions leading to iron refractoriness, and
processing/cooking technologies employed to prepare iron-fortified
foods. Washing and rinsing of fortified rice prior to cooking in order
to remove dirt and dust has been shown to decrease its coated iron
content [102]. Moreover, the hindering effects of inflammation
and infection on iron absorption have been well documented,
where an increased production of hepcidin by the liver during such
physiological states inhibits the transport of iron into plasma [103].
The prevalence of inflammation and infection are high in tropical
regions of the world such as Asia, Africa, and Latin America [104].
In such conditions, iron fortified food as a dietary intervention may
not display the intended efficacy in reducing the burden of iron
deficiency. In India, infections such as malaria, dengue, typhoid,
and tuberculosis are widely prevalent [105]. Yet, studies which
have examined the efficacy of iron fortified foods in infection and
inflammation-prone populations are lacking in India. Therefore, in
order to fully elucidate the effectiveness of interventions through iron
fortification, an assessment of both infected and non-infected subjects
is essential. Other than inflammation driven by infections, obesity-related inflammation has also been shown to upregulate hepcidin production [106].
By far, ferrous sulfate has been the favored form of iron for
fortification, given its water solubility, as well as its similarity to native
food iron in terms of its bioavailability and its efficacy in the presence
of enhancers/inhibitors [104]. However, iron fortification also leads to undesired sensory changes in food, which vary based upon the form of iron utilized for fortification. This issue remains a primary cause of poor compliance among consumers and intervention cohorts. These sensory degradations may occur as changes in taste, development of off-odors due to rancidity, and changes in the color and texture of food. Generally, the most water-soluble and bioavailable iron fortificants tend to confer the highest degree of sensory change in terms of flavor and color, whereas less water-soluble forms, such as ferrous fumarate, impart a comparatively lower degree of sensory degradation. Hence,
the ideal choice of iron fortificant remains a compromise between
its solubility, bioavailability, cost, and the acceptability of sensory
changes in food [104].
To mitigate these sensory changes, encapsulated forms of iron have been developed and tested. Encapsulation reduces the reactivity of iron within the food matrix, limiting lipid oxidation, while also preserving the bioavailability of the fortificant. Hydrogenated vegetable oil is the capsule material of choice for iron fortificants such as
fumarates and sulfates [104]. In recent years, the development of
micro-encapsulation and nano-encapsulation technologies has
made significant strides towards increasing the bioavailability and
promoting controlled release of iron in the gut. Nano-encapsulation
of iron in bovine serum albumin (BSA) nanoparticles, used as a
fortificant in stirred yoghurt demonstrated stability, while increasing
blood count and iron parameters in IDA-induced rats [107]. No
adverse effects were noted in the liver, kidney, and spleen. This form
of nano-encapsulation also minimized lipid oxidation, while viscosity
and water-holding capacity of the food matrix were enhanced, thus
increasing the sensory acceptability of the food. The present challenge standing in the way of wide utilization of this technology is its cost, which currently runs roughly three to four times higher than that of conventional iron fortification.
Biofortification:
Biofortification refers to the development and production of
crops with enhanced levels and quality of nutrients in them [108].
It is often employed towards improving the nutrient profile of staple
foods in low and middle-income populations as a strategy to improve
their nutritional status. Biofortification of crops may be achieved
through conventional breeding techniques, genetic engineering of
crops, or agronomic fortification through addition of fertilizers.
Besides increasing the total concentration of micronutrients in plants,
improving overall bioavailability also falls well within the scope of
biofortification. Compared with traditional food fortification, biofortification stands out as a markedly more sustainable technique because the enhancement to crops is permanent, eliminating the need to fortify repeatedly. Beyond the initial development cost, biofortified crops can be grown by farmers without incurring ongoing expenses such as the additional procurement of fortificants.

Agronomic biofortification is defined as the enhancement of the micronutrient content of plants through the use of fertilizers; it serves as a temporary measure, since the acquired qualities are not passed down to subsequent progenies [108]. Agronomic biofortification is
often used to complement other forms of biofortification. A range of rice, wheat, maize, and millet varieties have been successfully biofortified for enhanced iron content [109]. However, amongst different
micronutrients that have been targeted for biofortification, the
agronomic approach only works well with zinc, selenium, and iodine,
whereas iron is poorly absorbed by plants when added in the form of
inorganic fertilizers [110]. This phenomenon has been attributed to the conversion of ferrous iron to its ferric form in the soil, which is not readily absorbed by plants.
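As a simplified illustration of this chemistry (a generic representation, not drawn from the cited study): ferrous iron applied in fertilizer is rapidly oxidized in aerated soil and hydrolyzes to insoluble ferric hydroxide, removing it from the soil solution before plant roots can take it up:

$$4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 10\,\mathrm{H_2O} \rightarrow 4\,\mathrm{Fe(OH)_3}\!\downarrow + 8\,\mathrm{H^+}$$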
Biofortification through breeding involves crossing plants with desirable traits to produce hybrid/segregant progenies that combine the agronomic phenotypes of both parents. Parent plant lines featuring desirable traits are selected and then crossed over multiple generations [111]. Repeated backcrossing and wide crossing are performed to remove undesirable traits or phenotypes that may have been inherited from either parent. An
iron-enriched rice variety (IR68144) was developed by crossing the semi-dwarf IR8 variety with the Taichung (Native)-1 variety of rice to feature
high iron content, high yield, increased disease resistance, and high
seed vigor [112]. Moreover, after polishing for 15 minutes, IR68144
retained approximately 80% of its iron, which was significantly higher
compared to other red and white pericarp varieties. Additionally,
IR68144 rice variety was demonstrated to improve the iron status of
non-anemic Filipino women [112].
In recent years, scientific advances in gene editing have promoted
the development of genetically modified (GM) crops. The primary
advantages of genetic modification over conventional breeding are
its efficiency and precision in target gene insertion into plants, the
relatively shorter time-frame within which plants with desired
traits can be developed as compared to breeding through multiple
generations, and the ability to stack multiple genes of interest
within a plant to express more than one desirable trait. Iron
biofortification through genetic engineering has been achieved in a
number of different ways, in a variety of crops [113]. The insertion of the human lactoferrin gene, which encodes an iron-binding protein, into rice demonstrated a 120% increase in the iron content of dehusked seed [114]. This increase was subsequently deemed suitable for infant supplementation. Ferritin is another gene
of interest, primarily because the encoded protein can bind up to 4,500 ferric ions per molecule. The transformation of both rice and lettuce with the soybean ferritin gene resulted in significantly higher concentrations of iron [113]. Additionally, co-expression of the nicotianamine synthase (NAS) gene alongside ferritin conferred a sixfold increase in the iron content of rice. Nicotianamine (NA) is an iron chelator which
plays an important role in the uptake, translocation, distribution, and
storage of iron in plants [115]. Alternatively, the bioavailability of iron
may be improved in crops through dephytinization. Introduction of phytase-encoding genes substantially decreased phytate content in rice and wheat [113].
In India, where wheat and rice serve as primary sources of
carbohydrates, biofortification of these staple crops could potentially
improve national iron status [116, 117]. Between 2017 and 2021,
the Indian Council of Agricultural Research (ICAR) developed 28
biofortified wheat cultivars, out of which 15 varieties showcased an
increased iron content [118]. Each of these cultivars thrives under very specific soil and climatic conditions, which has led to dispersed cultivation across the different states of India. The efficacy of these crops within the population is yet to be determined, and extensive interventional studies within the Indian population to assess their community-wide efficacy would provide clearer insights. In parallel, a greater effort towards promoting the
penetration of such biofortified crop varieties into the regular diets
of a larger segment of the population, as well as encouraging research
initiatives towards the development and cultivation of such crops
commercially, could help mitigate the anemia burden in India.
A major hurdle in the path of crop biofortification in India is the
country’s reluctance towards the commercial cultivation of GM crops
and GMOs for food. So far, Indian biofortification
practices have been limited to agronomic and conventional breeding
techniques. Although a variety of GM crops displaying a plethora of
desirable traits have been developed and extensively studied, Bacillus thuringiensis (Bt) cotton remains, to date, the only GM crop approved for widespread cultivation by the Government of India. Such abstinence from commercial cultivation of GM crops, especially for food, stems from concerns over environmental risks such as the escape of “engineered” traits into the wild, safety issues pertaining to human health, and the welfare of small-scale farmers against the market monopoly of large corporations [119]. Ethical dilemmas also persist, with questions relating to the moral implications of altering the “natural” state of a living organism serving as strong arguments.
The complex regulatory framework governing the development,
trials, release, and biosafety of GM crops in India is composed of six
committees. Functioning under the “Environment (Protection) Act, 1986” administered by the “Ministry of Environment, Forest & Climate Change” (MoEFCC), the six committees have been assigned advisory, approval, and monitoring roles (Figure 3). Moreover, the FSSAI
maintains stringent regulations pertaining to production, storage,
import, distribution, and sale of GM food crops. In 2022, an Indian
transgenic herbicide-resistant variety of mustard, “Dhara Mustard
Hybrid-11” (DMH-11), gained approval for environmental release by
the “Genetic Engineering Appraisal Committee” (GEAC), following
which a coalition of environmental and food activists, NGOs, and
scientists filed petitions to voice their disapproval of GM crop release
in India [120]. Soon after, the Supreme Court of India, on the advice of its “Technical Expert Committee” (TEC), halted the release of DMH-11, citing concerns about the potentially higher use of herbicides on such herbicide-tolerant plants, which could ultimately impact the health and well-being of India’s large mustard-consuming population [120].
Figure 3: The regulatory framework governing the development and commercialization of Genetically Modified (GM) crops under the ‘Rules 1989’ consists of six committees overseeing three different roles, implemented
by the Ministry of Environment, Forest & Climate Change (MoEFCC),
Department of Biotechnology (DBT), Ministry of Science & Technology,
Government of India, and State Governments [129].
The potential in millets: The cultivation of millets has been regarded as one of the most sustainable crop production methods, requiring minimal resource input and tolerating arid soil conditions. Millets’ high content of protein, micronutrients, and fiber, as well as their gluten-free nature, confers health benefits such as improved cardiovascular and gut health, and decreased risk of diabetes mellitus, obesity, and cancer [121, 122]. The rising popularity
of millets has been attributed to active campaigns and promotions
regarding the benefits associated with its cultivation and consumption
[123]. The FAO declared 2023 the ‘International Year of Millets’ with the aim of raising awareness about this crop among agricultural communities across the world. India is the leading producer of millet in the world, accounting for roughly 20% of all millets produced
globally [124]. According to the “Ministry of Agriculture and Farmers’
Welfare”, the common varieties of millet produced in India are pearl
millet, finger millet, and sorghum. Although millet consumption in India declined from 1962 through 2010, recent years have witnessed an active interest in reviving the production and consumption of millets. Moreover, state governments, as well as the
central government of India have recently sanctioned financial support
to farmers, private industries, and non-profit organizations engaging
in millet cultivation. Therefore, this crop provides an exceptional
medium to improve the iron status of the Indian population through
fortification and biofortification.
Numerous studies have pointed towards the success of iron
fortification and biofortification of millets in improving iron status.
A twofold increase in total iron absorption from biofortified pearl millet flour was observed among Beninese women [125]. It was noted that the higher phytic acid content of iron-biofortified millet did not affect the absorption of the additional iron. In the same study, post-harvest fortification of millets with iron demonstrated an approximately threefold increase in total iron absorption. Another study on iron-deficient Indian schoolchildren
between 12 and 16 years of age reported that iron absorption from
iron-biofortified pearl millet meals substantially exceeded the amount
of iron absorbed from non-fortified control millet meals [126].
Over a period of six months, serum ferritin and total body iron levels rose significantly in iron-deficient children, and the likelihood of becoming iron replete increased 1.64-fold.
Development of “Dhanashakti”, the first Indian pearl millet variety with enhanced iron and zinc concentrations, spanned the early 2000s through 2012, when it was successfully launched in the state of Maharashtra. To tackle nutritional deficiencies among the low-to-middle-income Indian population, the ICAR has actively promoted
pearl millet biofortification through its “All India Coordinated
Research Project on Pearl Millet” (AICRP-Pearl Millet). In 2018, the
AICRP-Pearl Millet mandated a minimum iron concentration of 42 mg/kg in new cultivars of biofortified millet, in order to
be qualified for trial approvals and subsequent release. Since then,
numerous iron and zinc biofortified varieties of pearl millet have been
developed and made available to farmers in India, among which the
“AHB 1269 Fe (MH 2185)” hybrid released in 2019 held the highest concentration of iron at 91 mg/kg [121]. ICAR has also focused on biofortification of finger millet, which is additionally a rich source of calcium. In 2020 alone, the ICAR released three high-iron finger millet varieties: VR 929 (131.8 ppm Fe), CFMV 1 (58 ppm Fe), and CFMV 2 (39 ppm Fe) [118].
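Since 1 ppm is equivalent to 1 mg/kg, these figures translate directly into dietary terms. As a rough illustration using an assumed serving size: a 100 g serving of the VR 929 variety would supply about

$$131.8\ \mathrm{mg/kg} \times 0.1\ \mathrm{kg} \approx 13.2\ \mathrm{mg}$$

of iron, before accounting for milling losses and the limited bioavailability of plant-derived iron.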
Conclusion
India faces an array of multifaceted challenges in terms of tackling
anemia. A major hurdle remains the inability to diagnose iron deficiency early, which, if prolonged, progresses to IDA. A greater emphasis
on both CBC and serum iron parameters would not only promote early
detection of iron deficiency, but also the diagnosis of other drivers
of anemia. It has also been made evident that diagnostic guidelines
based on studies on western ethnic populations may not be wholly
representative of the Indian population, especially due to variations in
genetic predisposition to clinical conditions, and differing biomarker
cutoff values. Therefore, a reevaluation of biomarker cutoffs is
recommended to accurately represent the vastly diverse and complex
Indian population.
Certain interventional strategies such as dietary diversification
and pre-processing/cooking methods could be adapted at a household
level to maximize iron intake and absorption. The predominantly plant-based Indian diet presents a high proportion of iron absorption
inhibitors, the effects of which could be minimized through household
measures. The introduction of educational programs within the
large rural population of India could be beneficial towards raising
awareness about employable household interventions. Furthermore,
fortification and biofortification of a wider range of staple foods could
offer a cost-effective avenue for improving the iron status of the nation
as a whole. An objective of both the government and concerned
authorities should be to facilitate a higher degree of penetration of
such fortified/biofortified foods into markets and ultimately into the
diets of a wider pool of the population.
It is imperative that further population-based studies be conducted
in order to link anemia prevalence to specific dietary trends within
different segments of the population. Although the inhibitory role of tea in iron absorption has been well documented, it is essential to validate these results within an Indian cohort that accounts for the place of tea in Indian diets. Moreover, research initiatives towards evaluating the efficacy of fortified and biofortified foods within populations with a high occurrence of infection and inflammation are recommended. A greater emphasis on identifying the population’s
predisposition to IRIDA would enable formulation of appropriate
strategies to combat oral iron refractoriness. Additionally, it is crucial that India reconsider its stance on GM crops, given the various advantages associated with their development and cultivation. GM crop cultivation would not only allow efficient biofortification, but also help ensure food security for India’s ever-growing population.