EDIT: I have included all of the MATLAB code here: https://gitlab.com/TheRTGuy/point-cloud-support-function-minimization. There I also have other evaluations.

Given the assumption that $f$ is always nonnegative, it follows that the convex hull of $C=\{c_1,\ldots,c_k\}$ contains the origin. Were this not the case, $f$ would be negative for some directions, and the origin would be separable from the point cloud. In that case, minimizing $f$ just amounts to finding the supporting hyperplane of $\operatorname{Conv}(C)$ that maximizes the distance from the origin, a problem that is easily solved with convex optimization (e.g. by solving the dual of the primal problem).
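That separable case can be handled, for instance, with the Frank-Wolfe (Gilbert) algorithm for the minimum-norm point of $\operatorname{Conv}(C)$: if the returned norm is bounded away from zero, the origin is separable and its negative gives the minimum of $f$. A minimal Python sketch (illustrative only, not part of the linked MATLAB code; `min_norm_point` is a hypothetical helper name):

```python
import numpy as np

def min_norm_point(C, iters=500):
    """Frank-Wolfe (Gilbert's algorithm) for the point of conv(C)
    closest to the origin; C holds one point per column."""
    x = C[:, 0].astype(float).copy()
    for t in range(iters):
        # Linear minimization oracle: the point of C most opposed to x
        s = C[:, np.argmin(x @ C)]
        gamma = 2.0 / (t + 2.0)          # standard FW step size
        x = (1 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(0)
# Cloud shifted away from the origin -> origin is separable
C = rng.uniform(1.0, 2.0, size=(2, 50))
print(np.linalg.norm(min_norm_point(C)))  # distance from 0 to conv(C)
```

When the origin lies inside the hull, the returned norm shrinks toward zero with the iteration count, which is exactly the regime the rest of this answer addresses.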

The problem, as some pointed out in the comments, boils down to enumerating the facets of the convex hull of $C$ and finding the one closest to the origin. This is a very hard problem (see https://www.cs.mcgill.ca/~fukuda/soft/polyfaq/node22.html#polytope:Vredundancy), since the number of facets is exponential in $n$ and $k$. So, forget about an "efficient" exact solution.

However, in my research I needed a way to compute a tight overapproximation of the convex hull of a point cloud. Here is a simplified adaptation of the approach I used to the OP's problem:

The approach:

- Generate vertices of a regular simplex, $\{h_1,\ldots,h_{n+1}\}$, and compute $$b_0 = \begin{pmatrix}\max_{c\in C} h_1^\top c \\ \vdots \\ \max_{c\in C}h_{n+1}^\top c\end{pmatrix}.$$
- Form the new simplex $S_0 = \{x\in\mathbb{R}^n~\vert~ H_0x \leq b_0\} = \operatorname{Conv}(\{s^1_0,\ldots,s^{n+1}_0\})$, where $$H_0 = \begin{pmatrix}h_1^\top\\\vdots \\h_{n+1}^\top\end{pmatrix},$$
and $s^i_0$ is the $i$-th vertex of the simplex. This simplex tightly bounds the set $C$.
- Optionally, perturb the simplex by a random rotation.
- Set $i=1$, and $f_0 = \min_j [b_0]_j$, where $[\cdot]_j$ is the $j$-th component of a vector.
- For each vertex $s^j_{i-1}$ solve the LP
$$
\begin{aligned}
x_j = \arg\min_x \quad & x^\top s^j_{i-1} \\
\text{s.t.} \quad & (c_l - s^j_{i-1})^\top x \leq -1, \quad \forall l\in\{1,\ldots,k\}, \\
& (s^l_{i-1}- s^j_{i-1})^\top x \leq -1, \quad \forall l\in \{1,\ldots,n+1\}\setminus\{j\},\\
& x^\top s^j_{i-1} \geq -1.
\end{aligned}
$$
The LP basically finds a supporting hyperplane of $C$ that separates it from $s^j_{i-1}$. This plane is often a facet of $\operatorname{Conv}(C)$.
- Set
$$
H_i = \begin{pmatrix}\frac{x_1^\top}{\lVert x_1\rVert}\\\vdots\\\frac{x_{n+1}^\top}{\lVert x_{n+1}\rVert}\end{pmatrix},\qquad b_i = \begin{pmatrix}\frac{x_1^\top s^1_{i-1}-1}{\lVert x_1\rVert} \\ \vdots \\ \frac{x_{n+1}^\top s^{n+1}_{i-1}-1}{\lVert x_{n+1}\rVert} \end{pmatrix}.
$$
This forms a new simplex $S_i = \{x\in\mathbb{R}^n~\vert~ H_ix \leq b_i\} = \operatorname{Conv}(\{s^1_i,\ldots,s^{n+1}_i\})$.
- Set $f_i = \min\{\min_j[b_i]_j , f_{i-1}\}$, set $i:=i+1$, and repeat the LP and simplex-update steps for $N$ iterations, or until $f_i = f_{i-1}$ for several consecutive iterations.
- Set the optimal value to $f^* = f_i$.
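The steps above can be sketched in Python with NumPy/SciPy (my actual implementation is in MATLAB; the helper names here are illustrative, and degenerate LPs are not handled):

```python
import numpy as np
from scipy.optimize import linprog

def regular_simplex_directions(n):
    """Rows are the n+1 unit vectors h_i pointing at the vertices of a
    regular simplex centered at the origin of R^n."""
    _, _, Vt = np.linalg.svd(np.ones((1, n + 1)))
    B = Vt[1:].T                           # (n+1) x n basis of {x : sum(x)=0}
    V = np.eye(n + 1) - 1.0 / (n + 1)      # centered simplex vertices
    H = V @ B
    return H / np.linalg.norm(H, axis=1, keepdims=True)

def simplex_vertices(H, b):
    """Vertices of the simplex {x : Hx <= b}, with H of shape (n+1, n):
    drop one row at a time and solve the remaining n x n system."""
    return np.array([np.linalg.solve(np.delete(H, j, 0), np.delete(b, j))
                     for j in range(H.shape[0])])

def support_min(C, n_iter=5):
    """Suboptimal value (upper bound) of min_{|x|=1} max_c x^T c,
    assuming 0 in conv(C); C holds one point per column."""
    n = C.shape[0]
    H = regular_simplex_directions(n)
    b = (H @ C).max(axis=1)                # tight initial bounding simplex
    f_best = b.min()                       # f_0 = min_j [b_0]_j
    for _ in range(n_iter):
        S = simplex_vertices(H, b)         # rows are current vertices s^j
        H_new, b_new = np.empty_like(H), np.empty_like(b)
        for j, s in enumerate(S):
            # LP: hyperplane supporting C that separates it from vertex s
            A = np.vstack([C.T - s,                   # (c_l - s)^T x <= -1
                           np.delete(S, j, 0) - s,    # (s_l - s)^T x <= -1
                           -s[None, :]])              # s^T x >= -1
            ub = np.r_[-np.ones(A.shape[0] - 1), 1.0]
            x = linprog(s, A_ub=A, b_ub=ub, bounds=[(None, None)] * n).x
            H_new[j] = x / np.linalg.norm(x)
            b_new[j] = (x @ s - 1.0) / np.linalg.norm(x)
        H, b = H_new, b_new
        f_best = min(f_best, b.min())      # f_i = min(min_j [b_i]_j, f_{i-1})
    return f_best
```

Each $b$-entry upper-bounds $f$ in the corresponding normal direction, so the returned value is always an upper bound on the true minimum.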

Evaluation:
My implementation is in MATLAB.

- I evaluated the approach for $n=2,\ldots,20$, with $k=1000$. For each $n$ I ran the algorithm 3 times on a uniformly generated set $C$, to which I also applied a random rotation, and timed each run. The number of iterations was fixed to $N=5$. In case you are curious, these are the MATLAB commands used to generate the point cloud:

```
% Randomly rotated uniform point cloud: k points in [-2.5, 2.5]^n
C = 5*rand(n,k) - 2.5;
% Indexing pattern for the off-diagonal entries of a skew-symmetric matrix
idx = nchoosek(1:n,2);
dU = zeros(n);
dth = pi*(2*rand(n*(n-1)/2,1) - 1); % Uniform angles in (-pi, pi)
dU(sub2ind([n,n],idx(:,1),idx(:,2))) = dth;
dU(sub2ind([n,n],idx(:,2),idx(:,1))) = -dth;
C = expm(dU) * C; % expm of a skew-symmetric matrix is a rotation
```
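The rotation step works because the matrix exponential of a skew-symmetric matrix is always a proper rotation ($R^\top R = I$, $\det R = e^{\operatorname{tr}(dU)} = 1$). A quick Python check of the same construction (not part of the MATLAB code):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 5
A = rng.uniform(-np.pi, np.pi, size=(n, n))
dU = np.triu(A, 1) - np.triu(A, 1).T   # skew-symmetric: dU^T = -dU
R = expm(dU)                            # matrix exponential
# R is orthogonal with unit determinant, i.e. a proper rotation
print(np.allclose(R.T @ R, np.eye(n)), np.isclose(np.linalg.det(R), 1.0))
```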

- To evaluate how good the solution is, for each run I generate $10^6$ points $\{x_k\}$, uniformly distributed on the unit sphere in $\mathbb{R}^n$ (normalized Gaussian samples), and evaluate $f$ at each point. Then I compute $d = \min_k\{f(x_k)\} - f^*$ and plot it. Positive means the solution is good; negative means it is bad. Of course, a positive $d$ does not mean the solution is optimal. Furthermore, the higher the dimension, the less informative this evaluation becomes. Ideally, we would compare against another algorithm, which I don't have.
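In Python, this Monte-Carlo check amounts to the following sketch (the `eval_gap` helper name is illustrative; the square example has known minimum $f^* = 1$, the distance to its nearest facet):

```python
import numpy as np

def eval_gap(C, f_star, n_samples=100_000, seed=0):
    """Sample directions uniformly on the unit sphere (normalized
    Gaussians) and compare the sampled minimum of f with f_star."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, C.shape[0]))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    f_vals = (X @ C).max(axis=1)      # f(x) = max_c x^T c for each sample
    return f_vals.min() - f_star      # positive: f_star beats the samples

# Corners of the unit square: f(x) = |x_1| + |x_2|, with minimum 1
C = np.array([[1.0, 1.0, -1.0, -1.0], [1.0, -1.0, 1.0, -1.0]])
print(eval_gap(C, 1.0) >= 0.0)
```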

Results:

The first plot shows the execution times vs. $n$. The second plot shows the difference $d$ vs $n$. The final plot is just an arbitrary run for $n=20$, which shows $f(x_k)$ and $f^*$.


Conclusion:

At first glance, the approach seems to perform very well. Only a small number of iterations is necessary to get a good suboptimal solution. Also, solving LPs is very efficient, even for large $n$ and $k$. However, there is no rigorous optimality bound yet; more rigorous analysis is needed. Furthermore, evaluating the approach with uniformly sampled directions on the unit sphere is obviously not the best way, but at least it gives a sense of optimality. This is clearly seen in the second plot, where the gap increases with dimension. If the OP is interested in further evaluation, I would be willing to share some code.

The intuition is simple: we try to "squeeze" the points with simplices as much as possible, using the vertices of the previous simplices as anchors. The facets of the simplices often coincide with facets of $\operatorname{Conv}(\{c_i\})$.