I'm working on my first first-author publication. It's mostly done; I'm waiting for someone to run a simulation for me, and he's really busy. Once that's completed, I'll be able to write the last section and submit it to my PI for review.
One side effect of this, though, is that I've been doing deep-dive analyses of several electric propulsion publications relevant to the topic - and all of them have been less than stellar (no pun intended). I have no idea if this is "normal", per se, but it's a little frustrating.
Obviously, I don't want to go into too many details for a variety of reasons.
The first paper was a derivation of a one-dimensional mathematical model for something. That paper itself relied on calculations done about 10 years prior in another publication - and *that* older publication had some basic math errors in it. I wouldn't have known except that something looked a little off in the more recent presentation, so I went through the effort of doing the full derivation myself (neither paper had all the details, or even most of them). That effort led me to realize that the original author made some pretty silly math errors that the later authors retained/duplicated. Oddly, the final result is correct, but only because they make some unsupported/unjustified simplifying assumptions that then (seemingly coincidentally) remove the erroneous terms. I'm wondering if it's really coincidence or if the authors realized something was off in the math and just hand-waved away the corrections.
The second paper was one recently sent to a conference, so I can't get too upset, as it hasn't been peer-reviewed. That being said, a lot of EP publications are conference papers, and the quality of conference papers is usually pretty decent. In this case, they were commenting on a specific set of physics that happens in vacuum chambers, specifically under what we call free molecular flow.
---BEGIN TL;DR BLOCK---
In basic terms, a gas is a loose collection of particles within a volume. At ordinary pressures, the density of particles in the gas is high enough that they collide all the time and bounce off each other. Because of this, the gas is termed "collisional" (which just means that the particles collide), and this has a lot of implications. Because of the collisions, a gas will naturally diffuse and spread its particles evenly within a volume. For example, if there is a higher density in one area (say, because you're flowing gas into the volume at that spot, like air flowing out of a vent into a room), then the particles will naturally diffuse and spread out into the whole volume (room). This is just one effect of collisionality, but it's an important one. It means we can treat the gas as a flowing fluid - a contiguous, continuous stream rather than a bunch of individual particles.
If the gas density is low enough, though, collisions become rare. In a room, say, a particle entering the room would be more likely to cross the entire room without hitting another particle. For an order-of-magnitude reference, a standard room is at 1 atm; at that pressure, the gas is collisional and treated as a fluid, and a particle travels only a tiny fraction of a millimeter between collisions. If we drop down to, say, one millionth of an atmosphere - 0.000001 atm - we've got one millionth of the density we had, and a particle now travels centimeters between collisions. The standard pressures we deal with are closer to one billionth of an atmosphere (usually measured in micro-torr); at that point, a particle really will cross an entire chamber without hitting anything.
Gas at such low densities is non-collisional and is referred to as "free molecular flow" - restated, we have to treat it as individual molecules or particles rather than as a bulk gas, because it no longer behaves in any bulk fashion.
---END TL;DR BLOCK---
TL;DR: free molecular flow means that the normal concepts of "pressure", "diffusion", etc. that we associate with a gas no longer apply due to the lower gas density. This is the regime that most vacuum chambers operate in when testing thrusters (since we're trying to reproduce spacelike vacuum conditions).
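(If you want to put rough numbers on the transition, the usual yardstick is the Knudsen number, Kn = λ/L, where λ is the mean free path between collisions and L is the size of the container. Here's a quick back-of-the-envelope sketch - the temperature, the xenon-ish molecular diameter, and the 3 m chamber length are all just assumed illustrative values, not numbers from any of the papers I'm talking about:

```python
# Back-of-the-envelope regime check. All inputs are assumed illustrative
# values (room temperature, a xenon-ish molecular diameter, a ~3 m chamber),
# not numbers from any of the papers discussed here.
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]
T = 300.0           # gas temperature [K] (assumed)
D = 4.0e-10         # effective hard-sphere molecular diameter [m] (xenon-ish, assumed)
L = 3.0             # characteristic chamber length [m] (assumed)

def mean_free_path(p_pa):
    """Hard-sphere mean free path: lambda = k_B*T / (sqrt(2)*pi*d^2*p)."""
    return K_B * T / (math.sqrt(2) * math.pi * D ** 2 * p_pa)

ATM = 101325.0  # 1 atm in Pa
for label, p in [("1 atm", ATM),
                 ("1e-6 atm", 1e-6 * ATM),
                 ("1e-9 atm (~micro-torr)", 1e-9 * ATM)]:
    lam = mean_free_path(p)
    kn = lam / L
    regime = "free molecular" if kn > 10 else "continuum" if kn < 0.01 else "transitional"
    print(f"{label:>22}: lambda = {lam:.2e} m, Kn = {kn:.2e} -> {regime}")
```

Running that gives λ on the order of tens of nanometers at 1 atm and tens of meters at a billionth of an atmosphere. The usual convention is Kn ≲ 0.01 for continuum and Kn ≳ 10 for free molecular, with everything between called transitional - the cutoffs are conventions, not hard physical lines.)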
This paper, which explicitly focuses on a free molecular flow situation, repeatedly and explicitly invokes pressure and diffusion as explanations for behavior. It's a gross failure of even basic understanding of the physics of the situation, and I'm horrified that that kind of error in basic physics made it into even a conference paper; like, this is the kind of thing you'd fail someone for in an undergrad fluids class. The paper's *overall* findings are interesting and still relevant, but their justifications and explanations are completely undermined by this mischaracterization. It's even worse that the last author on the paper is someone I know who graduated with a PhD from our lab.
Like, dude, you know better than this. I *know* you know better than this.
The last paper is a more complicated case. One of the other universities we work with gave us a data set to use for validating some models, including a note that some values needed "a correction factor" - which is completely normal for this kind of measurement. Except we couldn't get our models anywhere close to the values in the data set, to the point where we were wondering if we were doing something wrong on our end.
I started looking into it and found the original conference paper the data was published under, as well as the follow-up peer-reviewed publication of the same content. The conference paper has the data values we have in the file, but it also includes a note that they were off by a multiplicative factor, with no real explanation why. The official publication has *entirely new values*: the original values (in the datafile and the conference publication) have been universally multiplied by a constant and then presented. No explanation for the change is given, and the constant doesn't match any correction factor we've ever seen for this kind of data. I've emailed one of the authors (who sent us the data) asking for clarification (he's a friend, so it's basically: "WTF man...?"), but no response yet. For now, we're ignoring the datafile and the conference paper and going with the publication, because 1) it's the official report and 2) it actually lines up with what our models show (which is a relief).
(My postdoc is thrilled, btw, that we discovered the error in their data because our model was right and the data was wrong.)
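For what it's worth, the check that tipped us off is a few lines of script. This is just a sketch with made-up stand-in numbers (obviously not the real data), showing how you'd test whether one set of values is just the other times a single constant:

```python
# Sketch with MADE-UP stand-in numbers, not the real data set: test whether
# the published values are just the datafile values times one constant.
import numpy as np

datafile = np.array([1.2, 2.5, 3.1, 4.8, 6.0])        # values as shared with us (made up)
published = np.array([2.04, 4.25, 5.27, 8.16, 10.2])  # values in the journal paper (made up)

ratios = published / datafile
if np.allclose(ratios, ratios.mean(), rtol=1e-3):
    print(f"single multiplicative factor: {ratios.mean():.3f}")
else:
    print("not a constant factor")
```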
So, each of the three is maybe something small, something unimportant in the bigger scheme. But there are three major aspects of research: underlying theory, mathematical models, and experimental data. The point is to get all three to match. I've now got three (sets of) papers where each of them screws up one of those areas.
My PI mentioned a few months ago that I was the only one in the lab "doing actual physics" because I'm the only one trying to merge theory and models with experimental data; most of the others are either using ML/AI techniques (which are inherently statistical and don't care about physics) or doing pure experimentation. Now I'm wondering how many other labs work the same way, with only 1/10th (or less) of the lab actually caring about the physics and math. It's kind of depressing.