A common rule of thumb is 80% code coverage. The flaw with that metric is that it treats every area of the code base as equally important. For example, I would say that code for CRUD operations in the admin section matters far less than the code that accepts a customer payment and issues an insurance policy. In a case like that, it matters a great deal which 20% of the code goes untested.
The other flaw with the code coverage number is what "covered" actually means: a line counts as covered if at least one test executes it, regardless of the other paths through the surrounding code, the possible input variations, or whether the test reflects how the method is actually used in the context of the application.
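A minimal illustration of that gap (the function and values here are hypothetical, chosen only to make the point): a single test executes every line below, so a coverage tool reports 100%, yet nothing exercises a negative percentage, a percentage over 100, or a zero price.

```python
def apply_discount(price, pct):
    # This line is "covered" by the single test below, yet the
    # interesting input variations are never exercised.
    return price - price * pct / 100

# One test, full line coverage, minimal confidence:
assert apply_discount(100, 10) == 90.0
```

Coverage tells you this line ran once; it says nothing about whether the behavior is correct across the inputs the application will actually see.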
So, while I would not completely dismiss the code coverage number, I think the more useful way to answer the question is in the context of the user stories and the BDD-style tests that fall out of them. That is, do the tests accurately describe the requirements of the application (typically phrased as "when a user does X, then Y should happen/be true")?
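As a sketch of what that looks like in practice, here is a BDD-style test in the given/when/then shape. The `PolicyQuote` class and its rules (e.g. the age check) are hypothetical stand-ins, not from any real library; the point is that the test reads like the user story it came from.

```python
class PolicyQuote:
    """Hypothetical domain object standing in for a real quote."""

    def __init__(self, age):
        self.age = age
        self.issued = False

    def accept_payment(self, amount):
        # Assumed business rule: adults paying a positive amount
        # get a policy issued; everyone else does not.
        if self.age >= 18 and amount > 0:
            self.issued = True
        return self.issued

def test_when_an_adult_pays_then_a_policy_is_issued():
    # Given a quote for an adult customer
    quote = PolicyQuote(age=30)
    # When they submit a payment
    quote.accept_payment(100)
    # Then a policy should be issued
    assert quote.issued is True

test_when_an_adult_pays_then_a_policy_is_issued()
```

The test name and the given/when/then comments mirror the requirement directly, so a failing test points straight back at a broken user story rather than at an arbitrary uncovered line.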
All projects have some kind of time constraint (either an actual deadline, or a maximum number of hours the customer will pay for) that prevents us from testing everything as thoroughly as we would like. Many projects also have too many input options to make testing every combination feasible. With that limited time, here is where I would focus testing first:
- Any code where money is changing hands, including calculating correct amounts (e.g. accepting credit card payments)
- Parts of the application whose functionality varies by user role (e.g. making sure users without admin rights can't do admin things)
- Parts of the application that involve a lot of different business rules and/or user stories (this is more subjective, but I think the customer and the lead developer can determine it between them)
- Anything else where a failure would have a high cost to resolve
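The role-based item above is worth showing concretely. This is a hedged sketch, not a real framework's API: `User` and `delete_policy` are illustrative names, and the key point is that the tests cover both sides of the rule, the allowed path and the denied one.

```python
class User:
    """Hypothetical user with a set of role names."""

    def __init__(self, roles):
        self.roles = set(roles)

def delete_policy(user, policy_id):
    # Assumed rule: only admins may delete. The denial path is
    # exactly the kind of branch a single happy-path test misses.
    if "admin" not in user.roles:
        raise PermissionError("admin rights required")
    return f"policy {policy_id} deleted"

# Test the allowed path:
assert delete_policy(User(["admin"]), 7) == "policy 7 deleted"

# And, just as importantly, the denied path:
try:
    delete_policy(User(["agent"]), 7)
    assert False, "expected PermissionError"
except PermissionError:
    pass
```

A coverage report would be satisfied by the admin test alone; the second test is the one that actually protects the business rule.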