I tried to deploy code to an Elastic Beanstalk environment. Every time I deploy this branch, EB terminates all of the resources (instances, ELB, RDS, etc.), tries to rebuild them, and fails. This leaves the environment in a bad state: the RDS instance is deleted, but the security groups and ENI are not. When I then try to delete the security groups manually, it fails with an error saying there are dependent objects.
I traced the dependency back to a network interface, but when I try to detach it (even with force detach) I get an error saying I do not have permission. This ENI should have been removed along with the RDS instance, but it was not. Now I cannot get rid of the environment at all, and I cannot rebuild it either.
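For reference, this is roughly how I traced the dependency with the AWS CLI (the security group and attachment IDs below are placeholders, not my real resource IDs):

```shell
# List any network interfaces still attached to the stuck security group
# (replace sg-0123456789abcdef0 with the actual group ID):
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-0123456789abcdef0 \
  --query 'NetworkInterfaces[].[NetworkInterfaceId,Description,Status]' \
  --output table

# Force-detaching the interface is the step that fails for me with a
# permissions error (attachment ID is a placeholder):
aws ec2 detach-network-interface \
  --attachment-id eni-attach-0123456789abcdef0 \
  --force
```

The describe call is what shows the ENI as the dependent object; the detach call is the one that returns the permission error even with `--force`.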
I am also not sure why this application causes the environment to re-create everything on every deployment. The EC2 instances are terminated, and when the replacements come back up they are registered with the ELB, but they never pass the health checks, so they are constantly marked Out of Service and the environment ends up in a dead state. It would help if I could see logs showing what is causing the environment to crash with this application.
Having Elastic Beanstalk delete every resource, including RDS, on each deployment is not acceptable: we constantly have to re-seed the database, and if this were ever deployed to production it would wipe all production data, which we cannot allow.
Is there a way to see what is going on during a deployment and why this may be happening?
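So far the only thing I have found is pulling the environment event stream and log bundle from the CLI, which has not shown me a root cause (the environment name below is a placeholder):

```shell
# Recent environment events (replace my-env with the real environment name):
aws elasticbeanstalk describe-events \
  --environment-name my-env \
  --max-items 50 \
  --output table

# Ask EB to gather the full instance log bundle, then fetch the
# S3 URLs where the bundle can be downloaded:
aws elasticbeanstalk request-environment-info \
  --environment-name my-env --info-type bundle
aws elasticbeanstalk retrieve-environment-info \
  --environment-name my-env --info-type bundle
```

Is there anywhere else (beyond these events and the log bundle) where EB records why it decided to tear down and rebuild the resources?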