Progress in Fusor Plasma Simulations

Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

I'm rewriting much of the plasma simulation to improve its accuracy and speed, and one of the modules that most needs updating is the electromagnetic (EM) field solver.

The previous EM solver wasn't a true EM solver. That is to say, it solved the electrostatic and magnetostatic Poisson equations using relaxation and/or finite-difference methods, depending on the situation.

poissonE.png
poissonM.png

These methods completely decouple the electric and magnetic fields and give accurate solutions only when the fields' and sources' rates of change are very slow (i.e. quasi-static). In addition, relaxation is ridiculously slow, and even using sparse matrices to solve an Ax=b finite-difference scheme in >2D is extremely memory-intensive (many TB of RAM).

The new 3D solver uses the finite-difference time-domain (FDTD) method, which fully couples the electric and magnetic fields, any charges and currents present, and all boundary conditions in a relativistic, self-consistent manner. There are still drawbacks, such as non-uniform propagation along diagonals when using a Cartesian grid (visible in the videos) and the need to store previous iterations of the fields, but it is still much better than the other methods. Technically, it solves the electric scalar potential and magnetic vector potential equations, converting them into the E and B fields in a post-processing step.

potentialEquations.png
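The attached image presumably shows the driven wave equations for the potentials, which in the Lorenz gauge (more on that below) read:

```latex
\nabla^2 \phi - \frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} = -\frac{\rho}{\epsilon_0},
\qquad
\nabla^2 \mathbf{A} - \frac{1}{c^2}\frac{\partial^2 \mathbf{A}}{\partial t^2} = -\mu_0 \mathbf{J}
```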

All that aside, here are some videos of a test simulation with the new EM solver. The domain is a conducting box at 0 V with two conducting hemispheres at +100 kV and -100 kV. All other boundaries are open. The potentials are applied instantly, simulating a step function response. Note the interference and reflection of the waves. The timestep is 2e-13 s and the domain is 256^3 cells spanning (84 mm)^3. Steady-state (i.e. the Poisson equation solution) is approached after some time. Since energy is lost only through the open boundaries, reaching steady-state takes a long time. Videos are of the YZ plane.

E Field: https://youtu.be/8IOmu2IlU3Y
Potential: https://youtu.be/erYUmWrbwc0
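As a quick sanity check on the timestep: with a cell size of 84 mm / 256 ≈ 0.33 mm, the 3D Courant stability limit is

```latex
\Delta t \le \frac{\Delta x}{c\sqrt{3}} = \frac{0.084\,\mathrm{m}/256}{c\sqrt{3}} \approx 6.3\times 10^{-13}\,\mathrm{s},
```

so the chosen 2e-13 s step sits comfortably below it.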

domain.png
Efield.png
Nathan Marshall
Posts: 53
Joined: Wed May 08, 2019 8:13 pm
Real name: Nathan Marshall

Re: Progress in Fusor Plasma Simulations

Post by Nathan Marshall »

Very nice, Liam! I am curious how you implemented the boundary conditions. I have played around with 2D FDTD wave equations but always had issues when trying to implement open boundary conditions.
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

Yeah, the open boundary conditions are the difficult part. I'm using equation 3.9 in the attached paper for each boundary, representing a wave traveling out of the boundary. I haven't yet fully verified my implementation, so I can't say much about its usefulness, but it seems to work all right. For the conductor boundaries, I just enforce the applied potentials, which leads to the 100% reflections.
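If anyone wants to experiment, here's a minimal 1D sketch of the standard first-order one-way (Mur-type) absorbing update; I'm not claiming it matches the paper's eq. 3.9 term for term, but it's the same idea:

```python
import numpy as np

# 1D wave equation (u_tt = c^2 u_xx) with a driven left edge and a
# first-order absorbing boundary on the right. A sketch, not production code.
c, dx = 1.0, 0.01
dt = 0.9 * dx / c                        # Courant-stable timestep
n = 200
u_old, u = np.zeros(n), np.zeros(n)
coef = (c * dt - dx) / (c * dt + dx)     # absorbing-boundary coefficient

for step in range(1000):
    u_new = np.empty_like(u)
    # Interior: standard second-order central-difference update
    u_new[1:-1] = (2 * u[1:-1] - u_old[1:-1]
                   + (c * dt / dx)**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    # Left edge: driven boundary (crudely mimics an applied potential)
    u_new[0] = np.sin(0.1 * step) if step < 300 else 0.0
    # Right edge: outgoing-wave condition, ~zero reflection in 1D
    u_new[-1] = u[-2] + coef * (u_new[-2] - u[-1])
    u_old, u = u, u_new
```

In 2D/3D, applying this same 1D condition along each boundary normal is only approximate, which is where residual reflections come from.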

The other thing I haven't implemented is the Lorenz gauge condition, which is required to get a physical solution since the potential wave equations are derived under its assumption. Because phi and A are decoupled, if I initialize some phi, it will never produce a B-field unless I also initialize a valid A. The solver then also calculates a non-physical E-field, since E depends on A as well. Depending on how difficult the Lorenz gauge is to implement, I may backtrack to the coupled phi and A equations.
Attachments
ln_fdtd_1d.pdf
Nathan Marshall
Posts: 53
Joined: Wed May 08, 2019 8:13 pm
Real name: Nathan Marshall

Re: Progress in Fusor Plasma Simulations

Post by Nathan Marshall »

Wow, I implemented the same boundary condition in my wave equation solver and it worked like a charm! I was overthinking the problem, trying to implement perfectly matched layers from the research literature, which proved too complicated. I didn't think that simply using a 1D open boundary condition would work so well, but I should have tried that first. On close inspection I do notice some minor reflections, since these boundary conditions aren't perfect in 2D, but it works well enough for my applications! Thanks for the suggestion.

Quick clip of single slit diffraction from my FDTD solver: https://youtube.com/shorts/kdysUlUlRHI?feature=share
Attachments
Screenshot from 2022-08-30 20-44-51.png
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

Great to hear that it works for you too, and nice simulation! I also get minor reflections in 3D... something to work on, but it's not a priority; I've got more significant errors elsewhere. I was also looking at perfectly matched layers, but as you say, they're much more complicated, and they also artificially increase the domain size.
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

There are still some hiccups in the FDTD solver; namely, there's a spurious open boundary that radiates inward, but it doesn't affect the dynamics of the plasma, which is confined mostly in the center of the domain. I'm probably combing over the GPU solver as you read this...

My latest challenge is one of disparate scales. Accurately capturing electron dynamics requires timesteps on the order of 1e-13 s, but I need simulation durations on the order of hundreds of microseconds, which translates to billions of timesteps. Even with each timestep taking some tens of milliseconds, that adds up to weeks of computation. (Note that each timestep is rather involved: moving particles, computing molecular reactions, depositing charge and current onto the mesh, solving the FDTD equations, adjusting the cathode potential, and logging any important parameters; see the sketch below.)

I need the long duration to limit the divergence of the current. If the background pressure is over the glow discharge threshold, the number of particles grows without bound, quickly surpassing the 20 million limit imposed by my 8 GB of GPU VRAM. Lowering the cathode potential increases the discharge pressure, but changing it unphysically fast generates massive EM waves that mess things up. Lots of work to do.
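For concreteness, here's a skeleton of that per-timestep loop (every function is a hypothetical stub, not my actual API; the point is the ordering of the stages):

```python
# Each stage touches every particle and/or the whole mesh once per step.
def push_particles(state, dt): pass          # move ions/electrons in E and B
def do_molecular_reactions(state, dt): pass  # ionization, charge exchange, ...
def deposit_charge_current(state): pass      # particles -> rho and J on the mesh
def advance_fields_fdtd(state, dt): pass     # update phi and A (then E and B)
def adjust_cathode_potential(state): pass    # slow feedback; fast swings launch EM waves
def log_diagnostics(state, step): pass       # particle counts, currents, energies

def run(state, dt=2e-13, n_steps=1_000_000_000):  # ~hundreds of microseconds total
    for step in range(n_steps):
        push_particles(state, dt)
        do_molecular_reactions(state, dt)
        deposit_charge_current(state)
        advance_fields_fdtd(state, dt)
        adjust_cathode_potential(state)
        log_diagnostics(state, step)
```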

Deliberately asymmetric.
Pablo Llaguno
Posts: 104
Joined: Sun Feb 05, 2017 6:00 pm
Real name: Pablo Llaguno

Re: Progress in Fusor Plasma Simulations

Post by Pablo Llaguno »

Amazing work Liam.

When you solve the inhomogeneous wave equations for the scalar and vector potentials, I am confused as to why you need to implement a Lorenz gauge condition. The physical electric and magnetic fields are independent of the gauge function (gauge invariance), and the usual treatment in textbooks is to show that such a gauge function can be found, not to actually solve for it.

For the Coulomb gauge, the gauge function is determined by a Poisson equation with the divergence of the vector potential as the source. For the Lorenz gauge I can't remember, but my electrodynamics book (Griffiths) has a problem (10.6) showing it is always possible to meet the Lorenz gauge, assuming you have solutions for the inhomogeneous wave equations for V and A. Wouldn't your numerical solutions for V and A make implementing a Lorenz gauge function unnecessary?
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

Thanks, Pablo. Indeed, you are correct: FDTD solvers don't require you to pick a gauge. I decided to work in the Lorenz gauge (although nowhere do I explicitly solve the gauge condition) for a few reasons.

1. It's relativistically valid. While I don't use relativistic particle pushing yet, it is relatively simple to implement.
2. It decouples the potential equations. This makes writing a finite difference scheme much simpler.
3. It is less numerically intensive as it requires slightly less memory access and has far fewer floating point operations.
4. Who doesn't like the symmetry?
Coupled equations
Lorenz gauge (lorenz.PNG)
Decoupled potential equations (lorenz gauge.PNG)
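For anyone who can't view the images, the standard forms (cf. Griffiths) are:

```latex
% Coupled potential equations (no gauge chosen):
\nabla^2 \phi + \frac{\partial}{\partial t}\left(\nabla \cdot \mathbf{A}\right) = -\frac{\rho}{\epsilon_0}, \qquad
\nabla^2 \mathbf{A} - \frac{1}{c^2}\frac{\partial^2 \mathbf{A}}{\partial t^2}
  - \nabla\!\left(\nabla \cdot \mathbf{A} + \frac{1}{c^2}\frac{\partial \phi}{\partial t}\right) = -\mu_0 \mathbf{J}

% Lorenz gauge condition:
\nabla \cdot \mathbf{A} + \frac{1}{c^2}\frac{\partial \phi}{\partial t} = 0

% Substituting the gauge condition yields the decoupled wave equations shown earlier in the thread.
```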
Pablo Llaguno
Posts: 104
Joined: Sun Feb 05, 2017 6:00 pm
Real name: Pablo Llaguno

Re: Progress in Fusor Plasma Simulations

Post by Pablo Llaguno »

Using the Lorenz gauge for this simulation is actually a very astute thing to do. However, I am still confused by the reasoning behind this quote:
Liam David wrote: Mon Aug 29, 2022 10:05 am The other thing I have not implemented is the Lorenz gauge condition, which is required to get a physical solution since the potential wave equations are derived under its assumption. Since phi and A are decoupled, if I initialize some phi, it will never produce a B-field unless I initialize a valid A. It also calculates a non-physical E-field since it depends on A as well. Depending on how difficult the Lorenz gauge is to implement, I may backtrack to the coupled phi and A equations.
How does initializing a phi (in the grid boundary conditions I imagine) affect the Lorenz gauge? Couldn't you initialize A such that the initial B is zero?

On another note, have you considered using Multiphysics simulators such as COMSOL? I did some FDTD simulations for an optical waveguide and while different from plasma simulations, it still solves Maxwell's equations. I guess it's much faster to do PIC simulations in MATLAB?
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

Consider this example: I have two parallel plates that are instantaneously set to some potential difference at t=0. We know in advance that the resulting EM waves within the gap between the plates will have both E and B components due to the coupling in Maxwell's equations. There are no charges or currents in the gap, so the potential wave equations become homogeneous--both the coupled ones and the decoupled ones.

Consider first the coupled equations (no applied gauge). We have a nonzero d(phi)/dt due to our initial conditions. The first coupled equation tells us that the time derivative of the divergence of A must be nonzero. Thus, except perhaps instantaneously, A cannot be zero. The second coupled equation leads us to the same conclusion. Our algorithm, developed around finite difference stencils, would take this coupling into account and give us equations of the form
phi(n+1) = f(phi(n), A(n); phi(n-1), A(n-1); ...)
A(n+1) = g(phi(n), A(n); phi(n-1), A(n-1); ...)
where n is the timestep index.

Now consider the decoupled equations (in the Lorenz gauge). Since the initial conditions for phi have no coupling to A, then unless A is given an initial condition that satisfies the Lorenz gauge, not only will A be zero for all time, but the solution will be nonphysical essentially by construction. The finite difference stencils give equations of the form
phi(n+1) = f'(phi(n); phi(n-1); ...)
A(n+1) = g'(A(n); A(n-1); ...)

Another issue can (and does) arise: the continuity equation. It is a corollary of Maxwell's equations and can thus be derived from them, and it must hold for the simulation to be physical. It relates the change in charge density (rho) to the current density (J) and partially (though incompletely) couples the decoupled phi and A equations. The discretization of the simulation can increasingly violate continuity over time, causing divergences in the fields. Qualitatively this hasn't been an issue for me yet, but it is something I will eventually fix. One method is to define a fictitious field that effectively transports charge-conservation errors out of the simulation domain at v > c.

Continuity equation (continuity.PNG)
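For reference, that equation is

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0
```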
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

Some preliminary plots of electron+ion flux on the surfaces inside a standard cube fusor. Normalization is unimportant--I'm literally counting simulation particles here. The highest-flux region at the endcaps is ~1.2" in diameter, which closely matches the measured value (~1.1") within uncertainties. The upper plot uses a linear color scale, the lower plot a log color scale.

collision map 2.png

collision map.png
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

Really getting down to the limits of my hardware... I managed to eke out (roughly) another order of magnitude in computation speed by optimizing memory access, limiting CUDA kernel grid sizes, and, most importantly, sorting particles by their global cell index (i.e. the linear index in an n x m x p simulation grid; see the sketch below). The latter allows the GPU to limit global memory accesses (latency of hundreds of clock cycles) and instead reuse data already loaded into registers (single-cycle latency). I also fixed a few bugs, notably one that didn't save/delete the correct particles in some rare cases.
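Here's a minimal NumPy sketch of the sorting idea (array names are placeholders; on the GPU this is typically a radix sort by cell key):

```python
import numpy as np

def sort_by_cell(pos, vel, grid_shape, cell_size):
    # Each particle's 3D cell coordinates, clipped into the grid
    ijk = np.clip((pos // cell_size).astype(np.int64),
                  0, np.array(grid_shape) - 1)
    # Global (linear) cell index in the n x m x p grid
    key = np.ravel_multi_index((ijk[:, 0], ijk[:, 1], ijk[:, 2]), grid_shape)
    # Stable sort so particles in the same cell end up contiguous in memory;
    # kernels can then reuse per-cell data held in registers instead of
    # issuing scattered global-memory reads.
    order = np.argsort(key, kind="stable")
    return pos[order], vel[order], key[order]
```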

To give a relative sense of the performance: the peak particle push rate for the state-of-the-art PIC code VPIC 2.0 on an Nvidia A100 GPU is 6 particles/nanosecond (https://arxiv.org/pdf/2102.13133, Figure 6). Notably, this metric includes only pushing the particles, not calculating collisions, advancing the fields, etc. My RTX 3080, which is not optimized for computation, achieves 2.1 particles/nanosecond in this step. The total loop performance is 0.42 particles/ns. Moreover, my grid has >16 million cells, while the VPIC grid sizes are much smaller.

I think the next improvements are physics upgrades, now that I have a pretty fast code.

efficiency.png
Nicolas Krause
Posts: 230
Joined: Fri Sep 30, 2016 7:36 pm
Real name: Nicolas Krause
Location: Canada

Re: Progress in Fusor Plasma Simulations

Post by Nicolas Krause »

Hi Liam,

I'm not sure how much additional programming work it would be, but have you heard of Lambda Labs? They have a GPU cloud with some pretty beefy hardware, and since you've already rewritten your program to use CUDA, I can't imagine porting would be too bad. Their prices seem pretty reasonable, around $2-4/hr for some of their smaller GPU setups. If you wanted to scale some parameter like particle count, it might be more worthwhile than reworking a bunch of the math to fit a different model.
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

I haven't heard of them before, but their prices seem very competitive. 8x A100s with 240 GB VRAM and 1.8 TB RAM is just $8.80/hr. My code and investigation are just not at that stage yet.
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

It's been a while since I worked on the plasma simulation, and even longer since I've posted an update. Most of the design work for my next cube system is done, so until I get back to my hardware, research and simulations are again my priority. The goal is still to optimize my cathode geometry, and relatively recent advances in manufacturing technologies have opened up many possibilities.

The previous version of my simulation had several drawbacks, including the lack of a relativistic particle pusher (needed for fast electrons), some dubious handling of secondary electrons and particle reflections, and, most importantly, a particle-neutral collision model that did not properly conserve energy and momentum.

I've updated the particle pusher to use the relativistic Boris algorithm and validated it against known analytic results for energy conservation and particle drift physics. This was the easiest change, as I only had to add a few gammas.
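For reference, here's a sketch of the textbook relativistic Boris step, written in terms of the proper velocity u = gamma*v for a single particle (not necessarily line for line what my code does):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def boris_push(x, u, E, B, q, m, dt):
    qmdt2 = q * dt / (2.0 * m)
    u_minus = u + qmdt2 * E                             # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
    t = qmdt2 * B / gamma                               # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)  # B rotation
    u_new = u_plus + qmdt2 * E                          # second half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_new, u_new) / C**2)
    return x + (u_new / gamma) * dt, u_new              # drift with v = u/gamma
```

The non-relativistic version is recovered by setting gamma = 1, hence "adding a few gammas".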

The boundary handling module now properly handles particle reflections by flipping only the velocity component along the surface normal. Secondary electrons are emitted normal to surfaces instead of opposite the direction of the incident particles. Still to be fixed is how many electrons I generate: currently, for a mean yield of e.g. 2.4, I generate 2 electrons and then a 3rd with probability 0.4. This isn't right for obvious reasons (one possible fix is sketched below). Also, the collision detection algorithm was updated to fix bugs in edge cases and to allow for macroparticle radii different from the cell size.
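For illustration, a minimal sketch of one common alternative (a candidate approach, not what the code currently does): draw the integer yield per impact from a Poisson distribution whose mean is the desired yield.

```python
import numpy as np
rng = np.random.default_rng()

def n_secondaries(mean_yield, n_impacts):
    # Poisson-distributed integer counts with the correct mean (e.g. 2.4);
    # allows 0, 1, 2, ... secondaries per impact instead of only 2 or 3.
    return rng.poisson(mean_yield, n_impacts)
```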

The collision model has been completely overhauled and now uses corrected impulse updates for two-body collisions. For collisions with more than two products, I partition the energy and momentum by performing additional pairwise collisions, thereby guaranteeing that both are conserved (see the sketch below). I'm currently working on validation, as well as on energy loss due to ionization (inelastic collisions). I plan to add a variety of excited species, e.g. D2* and D*, which should improve the discharge physics and allow me to estimate light output.
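Here's a sketch of the underlying two-body idea, the generic center-of-momentum construction (not my exact impulse update): the CM velocity and the relative speed are preserved while the relative velocity is rotated to a new direction, so momentum and energy are conserved by construction.

```python
import numpy as np
rng = np.random.default_rng()

def elastic_collision(v1, m1, v2, m2):
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)   # conserved CM velocity
    g = np.linalg.norm(v1 - v2)              # conserved relative speed
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                   # isotropic scattering direction
    v1_new = v_cm + (m2 / (m1 + m2)) * g * n
    v2_new = v_cm - (m1 / (m1 + m2)) * g * n
    return v1_new, v2_new
```

Chaining such pairwise collisions over more than two products partitions the energy and momentum while keeping both totals exact.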

Also planned, and what will probably be the most difficult upgrade, is a full ion-ion, ion-electron, and electron-electron collision model.

In the meantime, here's an image of the particle cascade generated by a 5.2 keV D2+ ion in deuterium gas:

collision.png

I intend to write a robust suite of test cases with known solutions before putting too much stock in any fusor-related simulations. More to follow, grad school permitting.
Frank Sanns
Site Admin
Posts: 2117
Joined: Fri Jun 14, 2002 2:26 pm
Real name: Frank Sanns

Re: Progress in Fusor Plasma Simulations

Post by Frank Sanns »

Nice work.

You might consider that the mean free path of electrons in the fusor is on the order of a kilometer, if I recall from past calculations. They will not hit many neutrals, which means it is the ions that do the ionization. Simplified models of the major factors are complicated enough without introducing minor factors.
Achiever's madness; when enough is still not enough. ---FS
We have to stop looking at the world through our physical eyes. The universe is NOT what we see. It is the quantum world that is real. The rest is just an electron illusion. ---FS
Liam David
Posts: 509
Joined: Sat Jan 25, 2014 5:30 pm
Real name: Liam David
Location: Arizona

Re: Progress in Fusor Plasma Simulations

Post by Liam David »

I haven't checked the exact numbers recently, but I agree that the electron mean free path at fusor energies is very long; ions and fast neutrals generate the vast majority of additional ionizations. That said, I claim that most electrons in a fusor are nowhere near full energy, especially inside the cathode, where they are vital for proper sheath physics, space charge, and ionization. That stuff, unfortunately for computation time, must stay in.