Here I discuss how to generalize hardmath's (excellent) answer as follows:

- $k$ rows and $k$ columns are removed instead of just one.
- The rows and columns may be at any location in the matrix (not necessarily the front, and not necessarily in a contiguous block).

I came across this question because I needed to solve this more general problem for a numerical method I am working on. Conceptually, all the key ideas are in hardmath's answer, but it still took a bit of work to carry out the generalization to a level of detail where it could be implemented efficiently on a computer. I am posting my findings here to save future readers the trouble. I also include Python/NumPy code which anyone may use as they see fit.

Let $\mathcal{R}^g$ be an index set of "good" rows of $A$, and let $\mathcal{R}^b$ be the complementary index set of "bad" rows of $A$, so that $\mathcal{R}^g \cup \mathcal{R}^b = \{1,2,\dots,N\}$, where $A$ is an $N \times N$ matrix.

Likewise, let $\mathcal{C}^g$ be an index set of "good" columns of $A$, and $\mathcal{C}^b$ be the complementary set of "bad" columns of $A$.

We seek to solve the linear system
$$A\left(\mathcal{R}^g,\mathcal{C}^g\right) ~x = b, \qquad (1)$$
where $A\left(\mathcal{R}^g,\mathcal{C}^g\right)$ denotes the submatrix of $A$ consisting of the "good" rows and columns. (This is the same notation used to select a submatrix in Matlab.)

For example, if $\mathcal{R}^g=(1,3)$, $\mathcal{C}^g=(5,4)$, and $N=5$, then the matrix in equation (1) becomes
$$A\left(\mathcal{R}^g,\mathcal{C}^g\right) = A\left((1,3),(5,4)\right) = \begin{bmatrix}
A_{15} & A_{14} \\
A_{35} & A_{34}
\end{bmatrix},
$$
and the complementary "bad" index sets are $\mathcal{R}^b=(2,4,5)$ and $\mathcal{C}^b=(1,2,3)$. (the ordering of the bad index sets is not important so long as one keeps it consistent)
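In NumPy (which uses 0-based indexing, so the example's index sets shift down by one), this kind of submatrix selection can be written with `np.ix_`:

```
import numpy as np

# 0-based analogue of the example above: R^g = (0, 2), C^g = (4, 3), N = 5
N = 5
A = np.arange(N * N, dtype=float).reshape(N, N)
good_rows = [0, 2]
good_cols = [4, 3]

# np.ix_ builds an open mesh so that A[np.ix_(rows, cols)] selects the
# submatrix A(rows, cols), matching the Matlab-style notation in the text
sub = A[np.ix_(good_rows, good_cols)]
print(sub)  # [[ 4.  3.]
            #  [14. 13.]]
```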

Now consider the matrix $\widetilde{A}$ which is the same as $A$, except matrix entries corresponding to interactions between "good" and "bad" index sets are set to zero. I.e.,
\begin{align}
\widetilde{A}\left(\mathcal{R}^g,\mathcal{C}^g\right) &= A\left(\mathcal{R}^g,\mathcal{C}^g\right) \\
\widetilde{A}\left(\mathcal{R}^b,\mathcal{C}^b\right) &= A\left(\mathcal{R}^b,\mathcal{C}^b\right) \\
\widetilde{A}\left(\mathcal{R}^g,\mathcal{C}^b\right) &= 0 \\
\widetilde{A}\left(\mathcal{R}^b,\mathcal{C}^g\right) &= 0.
\end{align}
Also, let $\widetilde{b}$ be the vector with $\widetilde{b}(\mathcal{R}^g)=b$, and $\widetilde{b}(\mathcal{R}^b)=0$, where $\widetilde{b}(\mathcal{R}^g)$ denotes the entries of $\widetilde{b}$ corresponding to indices in $\mathcal{R}^g$, and likewise for $\widetilde{b}(\mathcal{R}^b)$.

Because the good/bad cross-interaction blocks of $\widetilde{A}$ are zero, the good and bad unknowns decouple (assuming $A(\mathcal{R}^b,\mathcal{C}^b)$ is invertible, so that $\widetilde{A}$ is invertible). If we solve
$$\widetilde{A} \widetilde{x} = \widetilde{b}, \qquad (2)$$
then
$$\widetilde{x}\left(\mathcal{C}^g\right) = x.$$
In other words, the solution of (1) is found by extracting the $\mathcal{C}^g$ entries from the solution of (2).
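This claim is easy to check numerically. Here is a minimal sketch (the index sets and sizes are arbitrary choices for illustration; the diagonal shift just keeps the random example well conditioned):

```
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N)) + N * np.eye(N)  # well-conditioned test matrix
good_rows, bad_rows = [0, 1, 2, 4, 6, 7], [3, 5]
good_cols, bad_cols = [1, 2, 3, 5, 6, 7], [0, 4]
b = rng.standard_normal(len(good_rows))

# Build A_tilde by zeroing the good/bad cross-interaction blocks
A_tilde = A.copy()
A_tilde[np.ix_(good_rows, bad_cols)] = 0.0
A_tilde[np.ix_(bad_rows, good_cols)] = 0.0

# Build b_tilde with b in the good rows and zeros in the bad rows
b_tilde = np.zeros(N)
b_tilde[good_rows] = b

x = np.linalg.solve(A[np.ix_(good_rows, good_cols)], b)  # system (1)
x_tilde = np.linalg.solve(A_tilde, b_tilde)              # system (2)
print(np.allclose(x_tilde[good_cols], x))                # should print True
```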

Now, like in hardmath's answer, $\widetilde{A}$ can be written as a low rank update of $A$ in the form
$$\widetilde{A} = A - UV,$$
and system (2) can be solved via the Woodbury formula:
\begin{align}
\widetilde{x} = \widetilde{A}^{-1}\widetilde{b} &= \left(A - UV\right)^{-1}\widetilde{b} \\
&= A^{-1}\widetilde{b} + A^{-1}U\left(I_{2k \times 2k} - VA^{-1}U\right)^{-1}VA^{-1}\widetilde{b}.
\end{align}
Here $U$ is an $N \times 2k$ matrix, $V$ is a $2k \times N$ matrix, and $I_{2k \times 2k}$ is the $2k \times 2k$ identity matrix, where $k$ is the number of "bad" rows/columns (the size of $\mathcal{R}^b$ and $\mathcal{C}^b$).

The matrices $U$ and $V$ are given as follows:
\begin{align}
U\left(\mathcal{R}^g, (1,\dots,k)\right) &= 0 \\
U\left(\mathcal{R}^b, (1,\dots,k)\right) &= I_{k\times k} \\
U\left(\mathcal{R}^g, (k+1,\dots,2k)\right) &= A\left(\mathcal{R}^g, \mathcal{C}^b\right) \\
U\left(\mathcal{R}^b, (k+1,\dots,2k)\right) &= 0
\end{align}
\begin{align}
V\left((1,\dots,k), \mathcal{C}^g\right) &= A\left(\mathcal{R}^b, \mathcal{C}^g\right) \\
V\left((1,\dots,k), \mathcal{C}^b\right) &= 0 \\
V\left((k+1,\dots,2k), \mathcal{C}^g\right) &= 0 \\
V\left((k+1,\dots,2k), \mathcal{C}^b\right) &= I_{k\times k}.
\end{align}

Here is some Python/NumPy code demonstrating how to do this.

First, building the matrices $U$ and $V$ and verifying that they are correct:

```
import numpy as np

N = 102
A = np.random.randn(N, N)
bad_rows = [2, 1, 37, 15]    # R^b (0-based)
bad_cols = [80, 60, 40, 55]  # C^b (0-based); must have the same length as bad_rows
k = len(bad_rows)
good_rows = np.setdiff1d(np.arange(A.shape[0]), bad_rows)  # R^g
good_cols = np.setdiff1d(np.arange(A.shape[1]), bad_cols)  # C^g

# Build A_tilde directly: zero the good/bad cross blocks, keep the rest of A
A_tilde_true = A.copy()
A_tilde_true[:, bad_cols] = 0.0
A_tilde_true[bad_rows, :] = 0.0
A_tilde_true[np.ix_(bad_rows, bad_cols)] = A[np.ix_(bad_rows, bad_cols)]

# Build the same matrix as the low-rank update A - U*V
U = np.zeros((A.shape[0], 2*k))
U[bad_rows, :k] = np.eye(k)
U[good_rows, k:] = A[np.ix_(good_rows, bad_cols)]
V = np.zeros((2*k, A.shape[1]))
V[:k, good_cols] = A[np.ix_(bad_rows, good_cols)]
V[k:, bad_cols] = np.eye(k)
A_tilde = A - np.dot(U, V)

err_UV = np.linalg.norm(A_tilde_true - A_tilde) / np.linalg.norm(A_tilde_true)
print('err_UV=', err_UV)
```

which outputs: "err_UV= 0.0"

And now using $U$ and $V$ with the Woodbury formula to solve a linear system with the submatrix:

```
b = np.random.randn(N-k)
x_true = np.linalg.solve(A[np.ix_(good_rows, good_cols)], b)  # direct solve of (1), for reference

Z = np.linalg.solve(A, U)       # Z = inv(A)*U
C = np.eye(2*k) - np.dot(V, Z)  # C = I - V*inv(A)*U  (small 2k x 2k matrix)

b_tilde = np.zeros(N)
b_tilde[good_rows] = b
x_tilde = np.linalg.solve(A, b_tilde)                         # x0 = inv(A)*b_tilde
x_tilde += np.dot(Z, np.linalg.solve(C, np.dot(V, x_tilde)))  # x0 + inv(A)*U*inv(C)*V*x0
x = x_tilde[good_cols]

err_woodbury = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print('err_woodbury=', err_woodbury)
```

which outputs "err_woodbury= 9.049341577063206e-14" (i.e., zero, up to floating point rounding errors).
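As a usage note, the formula pays off when a single factorization of $A$ is reused across many different bad index sets: each submatrix solve then costs only triangular solves with the existing factors plus one small $2k \times 2k$ dense solve. Here is a sketch of that pattern; it assumes SciPy is available for `lu_factor`/`lu_solve` (the routine name `solve_submatrix` is my own, not from the answer above):

```
import numpy as np
import scipy.linalg

rng = np.random.default_rng(1)
N = 102
A = rng.standard_normal((N, N))
lu_piv = scipy.linalg.lu_factor(A)  # factor A once, O(N^3); reused below

def solve_submatrix(bad_rows, bad_cols, b):
    """Solve A(good_rows, good_cols) x = b via Woodbury, reusing the LU of A."""
    k = len(bad_rows)
    good_rows = np.setdiff1d(np.arange(N), bad_rows)
    good_cols = np.setdiff1d(np.arange(N), bad_cols)
    U = np.zeros((N, 2 * k))
    U[bad_rows, :k] = np.eye(k)
    U[good_rows, k:] = A[np.ix_(good_rows, bad_cols)]
    V = np.zeros((2 * k, N))
    V[:k, good_cols] = A[np.ix_(bad_rows, good_cols)]
    V[k:, bad_cols] = np.eye(k)
    b_tilde = np.zeros(N)
    b_tilde[good_rows] = b
    Z = scipy.linalg.lu_solve(lu_piv, U)              # inv(A)*U via triangular solves
    x_tilde = scipy.linalg.lu_solve(lu_piv, b_tilde)  # inv(A)*b_tilde
    C = np.eye(2 * k) - V @ Z                         # 2k x 2k capacitance matrix
    x_tilde += Z @ np.linalg.solve(C, V @ x_tilde)
    return x_tilde[good_cols]

bad_rows, bad_cols = [2, 1, 37, 15], [80, 60, 40, 55]
b = rng.standard_normal(N - len(bad_rows))
x = solve_submatrix(bad_rows, bad_cols, b)

good_rows = np.setdiff1d(np.arange(N), bad_rows)
good_cols = np.setdiff1d(np.arange(N), bad_cols)
x_true = np.linalg.solve(A[np.ix_(good_rows, good_cols)], b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small, at rounding level
```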