# A Finite Difference Frequency Domain Solver in Julia

There are many commercial solutions for solving Maxwell’s Equations for complex electromagnetics problems. This general-purpose approach is great for commercial solvers, but doesn’t easily lend itself to specific problems for which a solver could be tailored. There are a few open-source electromagnetics solvers, but the most powerful of them are written in syntactically dense C or C++. For this project, I wanted to explore the simulation of a frequency-selective device: specifically, a binary diffraction grating for mm-wave frequencies. These types of devices are well suited for simulation in the frequency domain. For the time being, I chose to stick with the traditional finite-difference method of solving the PDEs. So, presented in this text is a finite difference frequency domain solver written in the modern programming language Julia. The device I am simulating is generalized to a 2D solution with periodic boundary conditions, but the approach could easily be extended to 3D.

## What is Julia

In the realm of scientific programming, C and FORTRAN have long reigned supreme. These languages, while known for their speed, are not known for their ease of use or approachable syntax. More “modern” languages like Python or R have attempted to fix the latter issue by purposefully compromising some performance for accessibility. This leaves two camps of programmers: those who can’t sacrifice speed and program in something archaic, and those who can and want something more modern. This is the “two language problem,” the exact problem the Julia language is trying to solve.

Julia is a high-level, general purpose dynamic programming language that “looks like Python and runs like C”. It is meant to provide the performance of low-level languages like C and FORTRAN with the readability and expressiveness of Python. Python itself has tried to add performance through packages like NumPy, but at the cost of syntax.

In many cases, Julia is just as fast as C, as seen in these benchmarks. Due to its performance, many scientific computing communities have embraced Julia for high-performance projects. Celeste.jl, an astronomy project that characterizes stars, brought Julia into the “petaflop club” by sustaining over a petaflop of computation.

As of August 2018, Julia has reached its first stable release, 1.0, making now the perfect time to start working on large-scale projects. This release marks the language’s first long-term commitment to its syntax, style, and packages.

All of this is why I have chosen to write the code to solve the finite difference frequency domain problem in Julia. I believe with Julia, this code can be fast, efficient, portable, readable, and extensible. I am excited to continue working with Julia and I hope to contribute to the Julia project through my time working in it.

## The Problem Setup

Solving electromagnetics problems is primarily a partial differential equations problem.

\begin{align*}

\nabla \times \vec{E} &= -j \omega \vec{B} & \nabla \cdot \vec{D} &= \rho_v \\

\nabla \times \vec{H} &= \vec{J} + j \omega \vec{D} & \nabla \cdot \vec{B} &= 0 \\

\vec{D} &= \epsilon \vec{E} & \vec{B} &= \mu \vec{H}

\end{align*}

Simple problems like dipole antennas and dielectric spheres can be solved analytically, but complex geometry and large-scale problems require numerical methods to solve. Lucky for us, discrete problems can be boiled down to linear algebra problems – for which we have a large arsenal of techniques to utilize.

### Linear Operations as Matrices

For our case, we are looking at the differential form of Maxwell’s Equations. The linear operation of interest is of course the derivative. Consider the following:

\begin{align*}

\frac{1}{2\Delta x}

\begin{bmatrix}

0 & 1 & 0 & 0 & \dots & 0 \\

-1 & 0 & 1 & 0 & \dots & 0 \\

0 & -1 & 0 & 1 & \dots & 0 \\

\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\

0 & 0 & 0 & \dots & -1 & 0

\end{bmatrix}

\begin{bmatrix}

f_1 \\

f_2 \\

f_3 \\

\vdots \\

f_n

\end{bmatrix}

=

\frac{1}{2\Delta x}

\begin{bmatrix}

f_2-f_0\\

f_3-f_1 \\

f_4-f_2 \\

\vdots \\

f_{n+1}-f_{n-1}

\end{bmatrix}

\end{align*}

Performing the matrix multiplication yields a vector that approximates the derivative of f(x).

\begin{equation}

\frac{\delta}{\delta x} f(x) \rightarrow \pmb{D}_1^x f

\end{equation}

As the x spacing approaches 0, this operation approaches an exact derivative. Because the slope is taken at the center point between the two samples, this is a central finite difference.

One could readily write such matrices for any linear operation. Fourier transforms, Laplace transforms, integrals, etc. can all be approximated as a linear sum of known points of the function.

\begin{align*}

\pmb{L}\{f\} &\cong \sum_i f_ia_i

\end{align*}

Constructing a more complex operation is then just the left-multiplication of these matrices in their order of operations. Take for example the second-order derivative. From calculus, one could simply differentiate twice; this is just the multiplication of two derivative matrices.

\begin{equation}

\pmb{D}_2^xf=\pmb{D}_1^x \pmb{D}_1^xf

\end{equation}
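As a quick sanity check of these ideas, the central-difference matrix and its composition can be built in a few lines of Julia. This is only a sketch: the grid size `n` and spacing `Δx` are arbitrary, and the rows near the boundary are ignored.

```julia
using LinearAlgebra

# Central-difference matrix on n sample points with spacing Δx
n  = 100
Δx = 2π/n
x  = Δx .* (1:n)
D1 = (diagm(1 => ones(n-1)) - diagm(-1 => ones(n-1))) / (2Δx)

f   = sin.(x)
df  = D1 * f        # approximates cos.(x) away from the boundary rows
d2f = D1 * (D1 * f) # composing the operator approximates the second derivative
```

Away from the boundary rows, `df` and `d2f` agree with cos(x) and −sin(x) to the expected second-order discretization error.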

What makes all of this cool is that we can reduce the entire solution space to a single problem.

### The Ax=b Problem

Take an arbitrary partial differential equation

\begin{align*}

a(x)\frac{\delta^2}{\delta x^2}f(x) + \gamma b(x) \frac{\delta}{\delta x}f(x) + c(x)f(x) &= g(x)

\end{align*}

Term by term, the equation can be re-written in a discrete system using matrices.

\begin{align*}

[A][D^1_x][D^1_x][f] + \gamma [B] [D^1_x][f] + [C][f] &= [g]

\end{align*}

Combining like terms, we can construct an operator matrix **L** and simplify the problem.

\begin{align*}

[L] &= [A][D^1_x][D^1_x] + \gamma [B] [D^1_x] + [C]

\end{align*}

The problem then becomes

\begin{align*}

[L][f] &= [g]

\end{align*}
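The full pipeline can be sketched on a toy 1D problem: build **L** from difference matrices, then solve [L][f]=[g]. All sizes and coefficients below are illustrative (a(x) = 1, γ = 0, c(x) = 1, with implied zero boundaries).

```julia
using LinearAlgebra

# Toy 1D problem: f'' + f = g on n interior points (illustrative values)
n  = 50
Δx = 1/(n+1)
x  = Δx .* (1:n)
D1 = (diagm(1 => ones(n-1)) - diagm(-1 => ones(n-1))) / (2Δx)

L = D1*D1 + I      # operator matrix: [A][D][D] + [C] with A = C = identity
g = sin.(π .* x)   # an arbitrary right-hand side
f = L \ g          # the [L][f] = [g] solve
```

The returned `f` satisfies the discrete equation to machine precision, which is exactly the Ax=b structure developed in the next section.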

#### Solving Systems in Julia

As mentioned before, Julia aims to combine fast computation with an approachable syntax. The Ax=b problem is the perfect introduction to working with matrices in Julia.

Let’s define a matrix **A**

\begin{bmatrix}

1 & 2 & 3\\

4 & 5 & 6 \\

7 & 8 & 0

\end{bmatrix}

In Julia, we would write this as the following

```
julia> A = [1 2 3; 4 5 6; 7 8 0]
3×3 Array{Int64,2}:
 1  2  3
 4  5  6
 7  8  0
```

The syntax is similar to MATLAB, and the result is easily understood: A is the matrix with the given entries. After we assign A, the REPL echoes it back along with its type; in this case, A is a 3×3 `Array{Int64,2}`, a 2-dimensional array of 64-bit integers.

Making a source vector b follows the same procedure

```
julia> b = [1; 2; 3]
3-element Array{Int64,1}:
 1
 2
 3
```

Finally, just as before, we multiply b by the inverse of A to solve for x

```
julia> x = A^-1*b
3-element Array{Float64,1}:
 -0.3333333333333333
  0.6666666666666666
  0.0
```

Simple stuff. We’ll come back to this later when we start to construct the problem for FDFD.
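As a side note, the idiomatic way to solve the system in Julia is the left-division operator `\`, which factorizes **A** rather than explicitly forming its inverse:

```julia
A = [1 2 3; 4 5 6; 7 8 0]
b = [1; 2; 3]
x = A \ b   # LU-factorizes A and back-substitutes, rather than computing A^-1
```

For large systems this is both faster and more numerically stable than `A^-1*b`, and it is the form used for the FDFD solve later on.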

### Boundary Conditions

We left out one major talking point when discussing the finite difference method with matrices: in our derivative matrix, we referenced points of our function that are not in the problem space, namely f(0) and f(n+1). This is the boundary condition problem. Since we want as many solution points as we have problem points, we have to make an assumption about how the function behaves outside the solution space. This would normally involve a long discussion of all the different boundary conditions for PDEs, but this implementation focuses only on the Dirichlet boundary condition and the Floquet-Bloch boundary condition.

#### Dirichlet Boundary Condition

The Dirichlet condition is the simplest. In essence, for an ODE

\begin{equation}

y'' + y' = 0

\end{equation}

Or a PDE

\begin{equation}

\nabla^2y+y=0

\end{equation}

On the interval [a,b]

\begin{align*}

y(a) &= \alpha \\

y(b) &= \beta

\end{align*}

This allows us to directly assign values at the boundaries. To simplify further for our case, we will set both α and β to 0.
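In matrix form, a homogeneous Dirichlet boundary simply means the samples beyond the edges are taken as zero, so the difference stencil is truncated there. A minimal sketch (grid size and spacing are arbitrary):

```julia
using SparseArrays

# Forward-difference matrix on n points; the sample past the right edge is forced to 0
n, Δx = 5, 0.125
D = spdiagm(0 => fill(-1/Δx, n), 1 => fill(1/Δx, n-1))
# The last row is just -f_n/Δx: applying D implicitly uses f_{n+1} = 0
```

Applying `D` to a constant field gives zero everywhere except the boundary row, where the missing (zeroed) outside sample shows up.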

#### Floquet-Bloch Boundary Condition

The second boundary condition we will use is the Floquet-Bloch condition. This condition is used when modeling periodic devices like lattice structures, frequency-selective surfaces, and diffraction gratings. We can simulate this periodicity by relating the solution at one edge, f_1, to the solution at the other, f_n. Bloch’s theorem states that “waves in a periodic structure take on the same symmetry and periodicity as the structure itself.”

\begin{equation}

\vec{E}(\vec{r}) = \vec{A}(\vec{r})e^{j\vec{\beta}\cdot \vec{r}}

\end{equation}

Where the overall E field equals a periodic envelope A, with the same periodicity as the device, times a plane-wave phase tilt.

This poses a problem, though: the phase of the solution at the two boundaries isn’t necessarily the same. We can account for this by using the fact that the periodic envelope has the same periodicity as the device itself.

Consider the following

\begin{align*}

E_1 &= A_1e^{j\beta x_1}\\

E_n &= A_ne^{j\beta x_n}

\end{align*}

But from Bloch we know the A terms are equal, so we can solve for A_1 and substitute:

\begin{align*}

A_1 &= E_1e^{-j\beta x_1} \\

E_n &= (E_1e^{-j\beta x_1})e^{j\beta x_n} \\

E_n &= E_1e^{j\beta (x_n-x_1)}

\end{align*}

As x_n - x_1 is the size of the solution space in x, we denote it Λ_x. Finally,

\begin{equation}

E_{n} = E_{1}e^{j\beta\Lambda_x}

\end{equation}
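This result maps directly onto the derivative matrix: the wrap-around entry in the boundary row just carries the e^{jβΛ} phase. A small sketch, with arbitrary β and grid values:

```julia
using SparseArrays

# 1D forward-difference matrix with a Floquet-Bloch wrap-around entry
n, Δx = 5, 0.1
β = 2.0                    # Bloch wavenumber (arbitrary)
Λ = n * Δx                 # period of the structure
D = spdiagm(0 => fill(-1/Δx + 0im, n), 1 => fill(1/Δx + 0im, n-1))
D[n, 1] = exp(1im*β*Λ)/Δx  # wrap-around: f_{n+1} = f_1 * e^{jβΛ}

f = [exp(1im*β*Δx*i) for i in 1:n]  # a Bloch-periodic test field
# D*f now matches the forward difference of f at every row, including the last
```

For a pure Bloch field, every row of `D*f` (boundary row included) returns the same forward-difference value, which is what makes this boundary condition consistent.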

### Frequency-Domain Maxwell’s Equations as Matrices and the Yee Grid

Given a discrete solution space as defined by matrices of material properties ϵ and μ, we need a method of defining electric and magnetic fields in this space. Using the “Yee Cube” [1] as shown below, we can define an electric and magnetic field for each basis of the cube.

Using this cube to discretize our space allows us to easily formulate boundary conditions as well as approximate curl equations. Additionally, we can retain a material tensor for each basis direction of the Yee grid to allow for anisotropy.

There are a few reasons why we use a grid of Yee cubes in a “Yee Grid”

1. It’s divergence free

2. The boundary conditions make a lot of sense

3. We can write decent curl approximations

This last one is the most important, as we use the curl equations in solving Maxwell’s Equations.

Formulating Maxwell’s equations with finite differences is then rather straightforward. We simplify by normalizing the spatial coordinates with the wave number at the frequency of interest, absorbing it into the spatial derivative.

\begin{align*}

x’ &= k_0x \\

y’ &= k_0y \\

z’ &= k_0z

\end{align*}

We can then easily write Maxwell’s Equations as finite difference equations

\begin{align*}

\frac{\delta E_z}{\delta y'} - \frac{\delta E_y}{\delta z'} &= \mu_{xx}\tilde{H}_x &
\frac{E_z^{i,j+1,k} - E_z^{i,j,k}}{\Delta y'} - \frac{E_y^{i,j,k+1} - E_y^{i,j,k}}{\Delta z'} &= \mu_{xx}^{i,j,k}\tilde{H}_x^{i,j,k} \\

\frac{\delta E_x}{\delta z'} - \frac{\delta E_z}{\delta x'} &= \mu_{yy}\tilde{H}_y &
\frac{E_x^{i,j,k+1} - E_x^{i,j,k}}{\Delta z'} - \frac{E_z^{i+1,j,k} - E_z^{i,j,k}}{\Delta x'} &= \mu_{yy}^{i,j,k}\tilde{H}_y^{i,j,k} \\

\frac{\delta E_y}{\delta x'} - \frac{\delta E_x}{\delta y'} &= \mu_{zz}\tilde{H}_z &
\frac{E_y^{i+1,j,k} - E_y^{i,j,k}}{\Delta x'} - \frac{E_x^{i,j+1,k} - E_x^{i,j,k}}{\Delta y'} &= \mu_{zz}^{i,j,k}\tilde{H}_z^{i,j,k} \\ \\

\frac{\delta H_z}{\delta y'} - \frac{\delta H_y}{\delta z'} &= \epsilon_{xx}\tilde{E}_x &
\frac{H_z^{i,j,k} - H_z^{i,j-1,k}}{\Delta y'} - \frac{H_y^{i,j,k} - H_y^{i,j,k-1}}{\Delta z'} &= \epsilon_{xx}^{i,j,k}\tilde{E}_x^{i,j,k} \\

\frac{\delta H_x}{\delta z'} - \frac{\delta H_z}{\delta x'} &= \epsilon_{yy}\tilde{E}_y &
\frac{H_x^{i,j,k} - H_x^{i,j,k-1}}{\Delta z'} - \frac{H_z^{i,j,k} - H_z^{i-1,j,k}}{\Delta x'} &= \epsilon_{yy}^{i,j,k}\tilde{E}_y^{i,j,k} \\

\frac{\delta H_y}{\delta x'} - \frac{\delta H_x}{\delta y'} &= \epsilon_{zz}\tilde{E}_z &
\frac{H_y^{i,j,k} - H_y^{i-1,j,k}}{\Delta x'} - \frac{H_x^{i,j,k} - H_x^{i,j-1,k}}{\Delta y'} &= \epsilon_{zz}^{i,j,k}\tilde{E}_z^{i,j,k}

\end{align*}

Doing this in Julia is rather straightforward. We create diagonal entries just as in the finite difference example from the first section. Because these matrices become rather large, they are stored as sparse matrices.

The function `yee_grid_derivative` takes the size of the grid, the spatial resolution, and the boundary conditions for the edges, and returns the derivative matrices for E and H in x and y. The comments in the code explain more.

```
using SparseArrays, LinearAlgebra # sparse() and the UniformScaling identity I
@enum BC Dirichlet Periodic       # boundary-condition flags used below

function yee_grid_derivative(grid_size, grid_resolution, boundary_condition::Tuple{BC,BC}, k_inc = [0 0])
    # Generates the Yee grid derivative matrices on the 2D grid #FIXME for 3D
    # To create a sparse array, we have to make _I a vector of row indices,
    # J a vector of column indices, and V a vector of values.
    # We do this to create our large derivative array for the two boundary condition cases.
    # For the case when we supply an incident wave vector but the boundary is Dirichlet,
    # we need to set the k_inc back to zero
    if boundary_condition[1] == Dirichlet
        k_inc[1] = 0
    end
    if boundary_condition[2] == Dirichlet
        k_inc[2] = 0
    end
    # Storing results in a sparse array of row index _I, column index J, and value V
    _I = Int64[] # I is reserved for the UniformScaling constant for Identity
    J = Int64[]
    V = Complex{Float64}[]
    DEX = nothing
    DEY = nothing
    # Check if Nx = 1
    if grid_size[1] == 1
        DEX = 1.0im*k_inc[1]*sparse(I, grid_size[1]*grid_size[2], grid_size[1]*grid_size[2])
    else
        # Every element is multiplied by 1/Δx
        entry_x = 1/grid_resolution[1]
        for i = 1:grid_size[1]*grid_size[2]
            # Main diagonal, all -1*1/Δx
            push!(_I, i)
            push!(J, i)
            push!(V, -entry_x)
        end
        for i = 1:grid_size[1]*grid_size[2]-1
            # Secondary diagonal, above the main diagonal, all 1*1/Δx
            push!(_I, i)
            push!(J, i+1)
            push!(V, entry_x)
        end
        # Create sparse matrix for DEX
        DEX = sparse(_I, J, V)
        if boundary_condition[1] == Dirichlet
            # Every Nx-th element on the secondary diagonal is 0.
            # This makes sense as the value outside the solution space is forced to zero
            for i = grid_size[1]:grid_size[1]:grid_size[1]*grid_size[2]-1
                DEX[i, i+1] = 0
            end
        end
        if boundary_condition[1] == Periodic
            # Calculate entry from Bloch's theorem
            Λ_x = grid_size[1] * grid_resolution[1]
            entry_periodic_x = exp(1.0im*k_inc[1]*Λ_x)
            # Place entry at boundary
            for i = grid_size[1]:grid_size[1]:grid_size[1]*grid_size[2]
                DEX[i, i-grid_size[1]+1] = entry_periodic_x*entry_x
                if i+1 <= grid_size[1]*grid_size[2]
                    DEX[i, i+1] = 0
                end
            end
        end
    end
    # Clear variables
    _I = Int64[]
    J = Int64[]
    V = Complex{Float64}[]
    # Check if Ny = 1
    if grid_size[2] == 1
        DEY = 1.0im*k_inc[2]*sparse(I, grid_size[1]*grid_size[2], grid_size[1]*grid_size[2])
    else
        # Every element is multiplied by 1/Δy
        entry_y = 1/grid_resolution[2]
        for i = 1:grid_size[1]*grid_size[2]
            # Main diagonal
            push!(_I, i)
            push!(J, i)
            push!(V, -entry_y)
        end
        for i = 1:grid_size[1]*grid_size[2]-grid_size[1]
            # Secondary diagonal
            push!(_I, i)
            push!(J, i+grid_size[1]) # This diagonal is located Nx past the main diagonal
            push!(V, entry_y)
        end
        DEY = sparse(_I, J, V)
        # There is no manifestation of the Dirichlet boundary condition in the Dy matrix
        if boundary_condition[2] == Periodic
            # For the periodic boundary condition in y, we only place elements when the boundary exists.
            # This forms another diagonal, starting in column 1 after the roll over from the second diagonal.
            # This roll over occurs when row Nx*Ny-Nx hits the edge of the matrix starting from Nx.
            # This will only happen when the number of rows is greater than Ny,
            # or in other words, when Nx > 1.
            # There will only be Nx entries then
            if grid_size[1] > 1
                # Bloch theorem entry
                Λ_y = grid_size[2] * grid_resolution[2]
                entry_periodic_y = exp(1.0im*k_inc[2]*Λ_y)
                # Place at boundary
                for i = 1:grid_size[1]
                    DEY[i+grid_size[1]*grid_size[2]-grid_size[1], i] = entry_periodic_y*entry_y
                end
            end
        end
    end
    # Using copy to eagerly evaluate the transpose
    return DEX, DEY, copy(-transpose(DEX)), copy(-transpose(DEY))
end
```

### Constructing Perfectly-Matched Layer

As shown in the previous result, we retained diagonally anisotropic material properties. This allows us to construct uniaxial perfectly matched layers (PMLs) to model absorbers in our problem space. We do this by modifying our grid of material properties to include a loss term with

\begin{equation}

\tilde{\epsilon}_r = \epsilon_r' + j\epsilon_r''

\end{equation}

By adjusting the real part to match the impedance and the imaginary part to control attenuation, we can construct a nearly perfect absorbing material. This process is described in [2].

To prevent reflections at the boundary of the PML, we must match the impedance of the space directly before it.

\begin{equation}

\eta = \sqrt{\frac{\mu}{\epsilon}}

\end{equation}

We can meet this condition with

\begin{equation}

[\mu_r] = [\epsilon_r] =

\begin{bmatrix}

a & 0 & 0 \\

0 & b & 0 \\

0 & 0 & c

\end{bmatrix}

\end{equation}

If we force $\sqrt{bc} = 1$, the reflection coefficients are no longer functions of angle.

\begin{equation}

r_{Mode} = \frac{\pm\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}

\end{equation}

If we then set a = b, there is no longer any reflection, regardless of angle of incidence, polarization, or frequency. We can then write a new matrix **S** that encapsulates those material properties.

\begin{equation}

[S_z] =

\begin{bmatrix}

s_z & 0 & 0 \\

0 & s_z & 0 \\

0 & 0 & \frac{1}{s_z}

\end{bmatrix}

\end{equation}

Recall that these s_z terms are still just relative material properties that happen to satisfy the conditions a = b and $\sqrt{bc} = 1$.

\begin{equation}

s_z = \alpha + j\beta

\end{equation}

We repeat this procedure for all three axes and get similar results. We can combine the results into a single material tensor.

\begin{equation}

[S] = [S_x][S_y][S_z] =

\begin{bmatrix}

\frac{s_ys_z}{s_x} & 0 & 0 \\

0 & \frac{s_xs_z}{s_y} & 0 \\

0 & 0 & \frac{s_xs_y}{s_z}

\end{bmatrix}

\end{equation}

#### Implementing the PML

Following the formulation in [2], we can write a function that calculates these s terms for a given PML size and grid. These solutions satisfy the previous conditions with

\begin{align*}

s_x(x) &= a_x(x)[1+j\eta_0\sigma_x'(x)] \\

a_x(x) &= 1 + a_{Max}(x/L_x)^p \\

\sigma_x'(x) &= \sigma'_{Max}\sin^2\left(\frac{\pi x}{2L_x}\right)

\end{align*}

The maximum values are adjusted to minimize reflections. Typical values put σ′ around 1, p between 3 and 5, and a between 0 and 5.

And here is the function to perform those calculations

```
function calculate_PML_2D(grid_size, PML_size)
    # Creates a uniaxial PML that matches the size of the solution space (grid_size)
    # at the boundaries defined by PML_size
    # For lossy Maxwell's equations ∇×E = k[μ_r][S]H
    # These variables set up the behavior of the PML layer
    a_max = 3
    p = 2                    # Drop-off rate
    σ_prime_max = 1          # Maximum conductivity
    η = 376.73031333108594   # Impedance of free space
    # There are two cases, 2D and 3D - assume 2D #FIXME
    # Nx is the number of columns, Ny is the number of rows
    sx = fill(1.0+0.0im, grid_size) # Fill solution space with 1s
    sy = fill(1.0+0.0im, grid_size)
    # Add xlow PML
    for nx = 1:PML_size[1]
        ax = 1 + a_max * (nx/PML_size[1])^p
        σ_prime_x = σ_prime_max * sin((pi*nx)/(2*PML_size[1]))^2
        sx[PML_size[1]-nx+1, :] .= ax * (1 + 1.0im * η * σ_prime_x)
    end
    # Add xhigh PML
    for nx = 1:PML_size[2]
        ax = 1 + a_max * (nx/PML_size[2])^p
        σ_prime_x = σ_prime_max * sin((pi*nx)/(2*PML_size[2]))^2
        sx[grid_size[1]-PML_size[2]+nx, :] .= ax * (1 + 1.0im * η * σ_prime_x)
    end
    # Add ylow PML
    for ny = 1:PML_size[3]
        ay = 1 + a_max * (ny/PML_size[3])^p
        σ_prime_y = σ_prime_max * sin((pi*ny)/(2*PML_size[3]))^2
        sy[:, PML_size[3]-ny+1] .= ay * (1 + 1.0im * η * σ_prime_y)
    end
    # Add yhigh PML
    for ny = 1:PML_size[4]
        ay = 1 + a_max * (ny/PML_size[4])^p
        σ_prime_y = σ_prime_max * sin((pi*ny)/(2*PML_size[4]))^2
        sy[:, grid_size[2]-PML_size[4]+ny] .= ay * (1 + 1.0im * η * σ_prime_y)
    end
    return sx, sy
end
```
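The grading profile can also be sanity-checked in isolation. This sketch mirrors the constants used in the function above, with a hypothetical PML thickness of 20 cells:

```julia
# s(n) along one PML edge: a(n)·(1 + jη·σ'(n)), per the equations above
a_max, p, σ_prime_max, η = 3, 2, 1, 376.73031333108594
L = 20  # PML thickness in cells (hypothetical)
s = [(1 + a_max*(n/L)^p) * (1 + 1.0im*η*σ_prime_max*sin(pi*n/(2L))^2) for n in 1:L]
# The loss (imaginary part) grows monotonically toward the outer edge
```

A monotonically increasing imaginary part is exactly what we want: fields are absorbed gradually, which keeps the numerical reflection at the PML interface small.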

## Implementation and a working 2D Example

To finally put this all together, we start by defining a device to actually solve. In this example, we will be looking at a binary diffraction grating designed for 28 GHz. This is a device whose reflection depends on the frequency and angle of the incident wave.

### Drawing the BDG

I wrote a simple function that takes a “resolution” parameter and a definition of the BDG and assigns material properties to a grid. For this, we are using a plastic with an effective permittivity of 10. The function also returns several important pieces of information about the actual resolution of the grid, as well as where to place the source that we will define later.

```
ϵ_r_2X_Grid,μ_r_2X_Grid,NPML,Nx2,Ny2,RES,Q_Limit = BDG(Resolution,PML_size,freq_sweep[2])
NGRID = (floor(Int64,Nx2/2),floor(Int64,Ny2/2))
```

Visualizing this shows we constructed it correctly.

There are several important things to note about this problem. The left and right sides are given periodic boundary conditions so as to emulate a device that is infinitely periodic in x. Because this is a 2D simulation, we make the approximation that the solved fields are infinitely extruded in z. We add spacer regions above and beneath the device to allow evanescent fields to die out; it is recommended that these be about one wavelength. Finally, I added enough room for 20 cells of PML, a decent amount to minimize reflections without significantly increasing the size of the solution space.

### Incorporating the PML

We can use the function from before to calculate the PML. Visualizing the imaginary component, we can see that the loss increases as we approach the edge.

The next step is to incorporate this PML into our material grid. We can multiply the **S** matrix into the material matrices as before, or simply multiply and divide by the corresponding s entries as follows

```
μ_r_x = μ_r_2X_Grid ./ sx .* sy
μ_r_y = μ_r_2X_Grid .* sx ./ sy
μ_r_z = μ_r_2X_Grid .* sx .* sy
ϵ_r_x = ϵ_r_2X_Grid ./ sx .* sy
ϵ_r_y = ϵ_r_2X_Grid .* sx ./ sy
ϵ_r_z = ϵ_r_2X_Grid .* sx .* sy
```

### Setting up Ax=b

Combining everything we have solved for so far, we now need to construct our **A** matrix. We calculate the derivative matrices as before

`DEX,DEY,DHX,DHY = yee_grid_derivative(NGRID,k₀*RES, thisBC, k_inc/k₀) # Normalize k terms`

We build our material matrices as diagonal matrices to match the dimensionality of the derivative matrices. Because only the diagonal is populated, we store these as sparse matrices.

```
ϵ_r_x = spdiagm(0 => ϵ_r_x[:])
ϵ_r_y = spdiagm(0 => ϵ_r_y[:])
ϵ_r_z = spdiagm(0 => ϵ_r_z[:])
μ_r_z = spdiagm(0 => μ_r_z[:])
μ_r_x = spdiagm(0 => μ_r_x[:])
μ_r_y = spdiagm(0 => μ_r_y[:])
```

We can then solve for **A** by incorporating these derivative matrices with the material matrices as defined by the mode of the solution we want to solve for.

```
if Polarization == H
A = DHX/μ_r_y*DEX + DHY/μ_r_x*DEY + ϵ_r_z
elseif Polarization == E
A = DEX/ϵ_r_y*DHX + DEY/ϵ_r_x*DHY + μ_r_z
end
```

Next we need **b**. As the trivial solution to **A**x=**b** with b = 0 is just x = 0, we need a better choice of **b**. Obviously, we want to illuminate our device with a plane wave of some sort. We can simulate a plane wave source by first creating a plane wave in the entire solution space. This solution has a skew excitation at 15 degrees.

`F_Src = [exp(1.0im*-(k_inc[1]*i + k_inc[2]*j)) for i = 1:NGRID[1], j = 1:NGRID[2]]`

The next part is a bit tricky. Dr. Raymond Rumpf, who helped me with this project, developed a method of adding an excitation to the solution space while still satisfying Maxwell’s equations, called the total-field/scattered-field (TF/SF) formulation [3]. We first set a mask for where we want to separate the total field from the scattered field, called a **Q** matrix. It is generally a good idea to place this boundary in the free space between the PML and the device.

Computing the following generates an excitation **b** that satisfies Maxwell’s equations, because **b** is nonzero only at the interface between the total field and the scattered field. A more detailed summary of this formulation is in his paper.

\begin{equation}

b = (QA - AQ)f

\end{equation}

If there were no device, the result would simply be the source excitation beneath this boundary and zeros above it.
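A toy 1D sketch shows why this works: for a diagonal 0/1 mask **Q**, the commutator QA − AQ keeps only the entries of **A** that couple the two regions, so **b** is nonzero only at the interface. All sizes and values below are illustrative:

```julia
using SparseArrays, LinearAlgebra

# Toy 1D Helmholtz operator: second difference plus k²·I
n, Δ, k = 50, 0.1, 1.0
D2 = spdiagm(-1 => ones(n-1), 0 => fill(-2.0, n), 1 => ones(n-1)) / Δ^2
A = D2 + k^2 * I

f_src = [exp(1im*k*Δ*i) for i in 1:n]      # plane-wave source everywhere
q = [i <= 20 ? 1.0 : 0.0 for i in 1:n]     # mask splitting the space at cell 20 (hypothetical)
Q = spdiagm(0 => q)
b = (Q*A - A*Q) * f_src
# b is nonzero only at cells 20 and 21, the TF/SF interface
```

Since (QA − AQ) has entries (q_i − q_j)A_{ij}, every row whose neighbors share the same mask value cancels exactly, leaving a source sheet at the interface.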

### Solvers

Finally, we have an **A**x=**b** to solve! The problem is that the matrices we made, were they not sparse, would have over 900 million entries. Any significant resolution would require an enormous amount of RAM. The solutions presented here are solved directly, although an iterative solver could be used.

Last but not least, we ask Julia to solve **A**x=**b**. Here we call Array(A) to convert it to a dense array before solving, as we previously manipulated it as a sparse matrix.

`f = Array(A)\b`

### Post-Processing

I did not get the chance to perform any calculations of the solved results, but the real part of electric field is as expected. Look! It’s beautiful!

## Future Work

### GPU Implementation

One of the interesting things about the finite difference frequency domain problem is that it can be readily parallelized. Typical CPUs have 4, maybe 8 cores, but graphics cards have hundreds, if not thousands. These cores, however, are built for single instruction, multiple data (SIMD) workloads. This is where an iterative solver would really help: we can iterate on several cores at once and converge on a result faster. There are many places where the math presented could be sped up with GPUs.

### 3D Grid and Geometry Voxelization

All of the work presented thus far is a 2D simplification of the FDFD problem. This is perfectly suited for devices like the presented binary diffraction grating, but the math of course works in 3D. I originally wanted to simulate a volumetric lens antenna in 3D, and I wrote a voxelization script for parsing 3D geometry into a Yee grid. That code can be found on the project’s GitHub.

## Final Remarks

All in all, I am very satisfied with the performance of this solver. For a system with over 900 million entries, the Julia script solved the problem in 49.9 seconds! Not only that, but the translation from theory to Julia didn’t depend on any crazy syntax. I am excited to see where Julia is going, and I hope to expand this code to make use of as many of its features as possible.

## Bibliography

The code for this project can be viewed, edited, forked, and contributed to on my GitHub.

Julia Joins Petaflop Club

https://github.com/jeff-regier/Celeste.jl

[1] K. S. Yee, “Numerical solution of initial boundary value problems involving Maxwell’s equations in isotropic media,” IEEE Trans. on Antennas and Propagation, vol. 14, no. 3, pp. 302–307, 1966.

[2] Z. S. Sacks, D. M. Kingsland, R. Lee, and J. F. Lee, “A Perfectly Matched Anisotropic Absorber for Use as an Absorbing Boundary Condition,” IEEE Trans. on Antennas and Propagation, vol. 43, no. 12, pp. 1460–1463, 1995.

[3] R. C. Rumpf, “Simple implementation of arbitrarily shaped total-field/scattered-field regions in finite-difference frequency-domain,” Progress in Electromagnetics Research B, vol. 36, pp. 221-248, 2012.

## Special Thanks

A quick special thanks to Dr. Raymond Rumpf from the University of Texas El Paso for helping me understand the FDFD method. He has provided slides and lecture notes from his website as well as support via email on this project. Without him, I would have never been able to get this working!