One-bit compressed sensing has found broad applications. Owing to the unit-sphere constraint, the classical $l_{1}$ minimization frequently returns a signal that is not sufficiently sparse. In this paper, an $l_{1}-l_{q}$ $(1 < q \leq 2)$ nonconvex minimization method is developed for one-bit compressed sensing. We demonstrate that $l_{1}-l_{q}$ minimization returns a much sparser signal than $l_{1}$ minimization. Furthermore, we propose an iterative algorithm for $l_{1}-l_{q}$ minimization inspired by the difference-of-convex algorithm (DCA), and we prove its convergence. Theoretical analysis shows that the $l_{1}-l_{q}$ minimization method performs better as $q$ approaches 1, and that our algorithm converges in finitely many steps in typical cases. Numerical experiments demonstrate the advantage of the new algorithm over existing ones.
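To illustrate the DCA-style iteration underlying this family of methods, here is a minimal sketch for the special case $q=2$, i.e. minimizing $\lambda(\|x\|_{1}-\|x\|_{2})$ with a smooth least-squares data term standing in for the one-bit consistency constraint. This is an illustrative simplification, not the algorithm of the paper: the function name, the penalty parameter $\lambda$, the inner ISTA solver, and the quadratic data term are all assumptions made for a self-contained demo.

```python
import numpy as np

def dca_l1_minus_l2(A, b, lam=0.1, n_outer=20, n_inner=200):
    """DCA sketch for  min_x  lam*(||x||_1 - ||x||_2) + 0.5*||Ax - b||^2.

    Each outer step linearizes the concave term -lam*||x||_2 at the
    current iterate (the DC decomposition), then solves the resulting
    convex l1-regularized subproblem approximately with ISTA.
    """
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    step = 1.0 / L
    for _ in range(n_outer):
        nx = np.linalg.norm(x)
        g = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||x||_2 at x
        for _ in range(n_inner):                # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - lam * g
            z = x - step * grad
            # soft-thresholding = prox of the l1 term
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Small demo: recover a 3-sparse signal from 40 linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 58]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = dca_l1_minus_l2(A, b)
```

In each outer iteration only the concave part $-\|x\|_{2}$ is linearized, so every subproblem is a standard $l_{1}$ problem; this is the core mechanism that lets DC programming handle the nonconvex $l_{1}-l_{q}$ objective.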