Short-Term Shockwaves: Rethinking Inflation, Method, and LLMs in Research
The following write-up refers to my research, available at https://www.preprints.org/manuscript/202504.1067/v1. For full transparency and reproducibility, the transcripts of my conversations with the LLM used in the experiment referenced in the article are available here: https://drive.google.com/drive/folders/1IrE7br-QpN8pG2QSIlWmry-saWIcgeqp
---------------
Methodological Design
In early 2025, I developed a regression model to examine the impact of inflation on different income groups in the United States. I expected the results to point in a clear direction, but instead they raised new questions. The statistical checks held up: no multicollinearity, and the diagnostics were stable. However, I was working with just 12 months of data, ending in February 2025. That wasn't a flaw; it was intentional. I wanted to look at the short term.
Why? Because in the world of economic lived experience, the short term is where the pressure is most visible. It’s the space where prices spike suddenly, behaviors shift quickly, and policy response—or silence—can dramatically affect household resilience. Especially in a post-COVID economy, short-term volatility reveals more about real-time distress than multi-year averages ever could.
That realization prompted a shift in focus. Rather than treat inflation as the headline act, I used it as a testing ground for a broader question: Can we derive meaningful insights from limited, short-term data? Can qualitative and survey-based methods fill in what the numbers miss? And perhaps most importantly, does the order in which we analyze quantitative and qualitative data affect the insights we walk away with?
Findings: Income and Inflation
I ran multiple linear and quantile regressions to assess how inflation was hitting various income brackets. Nominal incomes appeared to rise, likely a result of wage adjustments, but real earnings lagged. Interestingly, households with higher debt-to-income ratios also reported higher earnings, which suggested many were using credit to maintain their standard of living. I also found that increased spending was often associated with lower earnings, a clear sign of short-term financial strain. Even as unemployment inched upward, income figures didn't budge much, possibly because job losses were concentrated in low-wage sectors. This aligns with recent analyses showing that pandemic-era inflation has disproportionately affected lower-income groups and amplified existing income disparities (Jayashankar & Murphy, 2023).
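For readers who want to see the shape of that modeling step, here is a minimal sketch using statsmodels. The file name and column names (real_income, inflation_rate, debt_to_income, spending, unemployment) are hypothetical stand-ins for the variables described above, not the exact series from the paper.

```python
# Minimal sketch of the regression setup described above, assuming a monthly
# dataset with hypothetical column names; the actual variables and data are
# documented in the linked preprint, not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("income_inflation_monthly.csv")  # placeholder file name

formula = "real_income ~ inflation_rate + debt_to_income + spending + unemployment"

# Ordinary least squares: the "multiple regression" pass.
ols_fit = smf.ols(formula, data=df).fit()
print(ols_fit.summary())

# Quantile regressions at the 25th, 50th, and 75th percentiles, to see how
# the estimated effects differ across the income distribution.
for q in (0.25, 0.50, 0.75):
    q_fit = smf.quantreg(formula, data=df).fit(q=q)
    print(f"Quantile {q}: {q_fit.params.to_dict()}")
```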
Qualitative Gaps
To add depth, I examined reports from institutions like the IMF, World Bank, and OECD. These revealed a policy shift toward fiscal tools, but more importantly, they showed that trust and confidence—especially among lower-income groups—now play a decisive role in shaping recovery. Economic inequality is no longer a footnote; it’s central to how fragile or resilient a recovery is likely to be. These observations echo wider findings from international research on how past and present pandemics have lasting effects on inequality and fiscal resilience (Sintos, 2023).
The Human Perspective
To see what the numbers could not capture, I ran two small-scale surveys: one with closed-ended questions and another with open-ended questions. Each had approximately 50 respondents, recruited via snowball sampling through professional networks and social media. Participants were US-based adults across diverse income brackets, with a mix of salaried, freelance, and unemployed individuals. The use of small-scale surveys was deliberate, as they are particularly well-suited to uncovering emerging trends, routine adaptations, and the subjective ways individuals interpret economic change—subtleties that are often overlooked in large-scale data (Small, 2009; Crouch & McKenzie, 2006).
These surveys added color to the regression output. Middle-income households, often lumped in with lower-income groups in broader studies, emerged as uniquely vulnerable. About 40% reported a 10–20% rise in expenses, and 80% felt their salaries weren’t keeping up. Their open responses mentioned cutting back, taking side gigs, and delaying major purchases. Lower-income participants described more severe hardship—skipping meals, avoiding medical care, and surviving on credit.
Insights from LLM Sequencing
Here’s where it got even more interesting. I wanted to test whether the sequence of analysis influenced the interpretation. In macro-level research, should we begin with the numbers and then turn to narratives, or reverse the order? And with surveys, does starting with closed responses versus open-ended ones affect the overall picture?
So, I tested the hypothesis using a large language model (LLM). The use of LLMs to explore methodological decisions is still emerging, but this kind of application points to their potential role in shaping—not just supporting—research design (Knicker et al., 2024). I gave it the same mixed-methods data and prompted it to interpret findings in both orders: quantitative to qualitative and vice versa for macroeconomic insights, and closed-ended to open-ended and vice versa for survey analysis. The LLM was prompted using neutral, instruction-based prompts with identical data segments across conditions, and I removed all labeling or sequencing hints to prevent bias. Each combination was run multiple times (at least three) to check for consistency in interpretation patterns.
For example, a typical prompt was as follows: “Here are two sets of data. Set A…Set B…Please share a combined interpretation of both sets together after you analyze them in the following sequence: Set A>Set B. The context is inflation's impact on different income classes. Share your output in 5-10 bullet points.” For full transparency, all GPT conversations and outputs are available online: Set 1: Macro Analysis – Quantitative to Qualitative; Set 2: Macro Analysis – Qualitative to Quantitative; Set 1: Survey Analysis – Closed-ended to Open-ended; and Set 2: Survey Analysis – Open-ended to Closed-ended.
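The original runs were carried out interactively in the chat interface (see the linked transcripts), but the same sequencing test can be scripted. The sketch below assumes the OpenAI Python client, a placeholder model name, and two hypothetical text files holding the quantitative and qualitative summaries; none of these are the exact artifacts used in the study.

```python
# Illustrative sketch of the order-swapping experiment: the same two data
# summaries are presented in both sequences, each ordering run several times
# to check consistency. Model name and file names are placeholders.
from itertools import permutations
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sets = {
    "A": open("macro_quantitative.txt").read(),  # regression findings
    "B": open("macro_qualitative.txt").read(),   # institutional-report themes
}

RUNS_PER_ORDER = 3  # each ordering repeated to check interpretive consistency

for first, second in permutations(sets, 2):
    for run in range(RUNS_PER_ORDER):
        prompt = (
            f"Here are two sets of data.\nSet {first}: {sets[first]}\n"
            f"Set {second}: {sets[second]}\n"
            "Please share a combined interpretation of both sets together "
            f"after you analyze them in the following sequence: Set {first}>Set {second}. "
            "The context is inflation's impact on different income classes. "
            "Share your output in 5-10 bullet points."
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- order {first}>{second}, run {run + 1} ---")
        print(response.choices[0].message.content)
```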
The results? Fascinating.
When starting with quantitative findings, the LLM consistently generated more integrated, holistic interpretations. For example:
“Additionally, Analysis 2’s findings on political unrest complement Analysis 1’s insights on the financial strain, reinforcing the argument that inflation is not merely an economic issue but a broader societal challenge…”
In the quantitative→qualitative sequence, the LLM wove together empirical and sociopolitical insights, creating a more cohesive narrative. In contrast, when starting with qualitative data, the integration was less cohesive:
“Analysis 1 discusses inflation’s role in rising inequality, social unrest, and housing disparities—themes not explicitly addressed in Analysis 2 but indirectly supported by its findings…”
Here, links between themes were acknowledged but felt secondary, more like a side note than a central point.
A similar pattern emerged in the survey sequences. The closed→open combination yielded more layered reasoning. For instance:
“Lower-income respondents favor direct interventions such as wage increases and rent controls, treating inflation as a crisis requiring immediate relief. Middle-income respondents lean toward market-based solutions like tax cuts and corporate regulation, perceiving inflation as a systemic issue that needs structural fixes.”
Compare that to the open→closed version:
“Lower-income respondents favor direct interventions… while middle-income respondents prefer market-based solutions…”
The second version reported the preferences but lacked the nuanced “why” behind them. In both macro and micro contexts, starting with structured data gave the LLM a more stable foundation for interpretation, allowing qualitative insights to enhance rather than overtake the narrative.
Implications for Mixed-Methods Research
Ultimately, this experiment underscored something subtle but powerful: in mixed-methods research, the sequence isn’t just a technical choice—it actively shapes what we notice, prioritize, and conclude. For academic institutions, this suggests a need to re-examine how research training and peer review treat method sequencing—not as a procedural afterthought, but as a design choice with analytical consequences. For government agencies and policy think tanks, especially those responding to crises with limited data, it highlights the importance of methodological flexibility: sequencing decisions can accelerate or delay how quickly actionable insights surface. Whether the context is inflation, climate risk, or public health, integrating structured and narrative data in a purposeful order can make the difference between a reactive policy and a proactive one. Recognizing the hidden weight of sequencing may be the key to making our research—and our responses—more timely, equitable, and attuned to real-world complexity.