Sep 15, 2022
With only the provided info to guide the answer, it should be something like: the security posture is not likely to improve. When a new customer arrives at AWS or GCP, both platforms are already ISO 27001 certified: https://aws.amazon.com/compliance/iso-27001-faqs https://cloud.google.com/security/compliance/iso-27001
The only things that can weaken that posture are our own configuration decisions post-signup, and that will be true wherever you go. So you're already in as good a position as you're likely to ever be. In both cases there is the idea of a "vulnerability mitigation": a security control that is technically violated, but a mitigating factor (whatever that happens to be) makes it a non-issue, so it becomes a documented exception. I used to handle this kind of thing (POA&M) for the DoD in a past life: https://www.dhs.gov/sites/default/files/publications/4300A-Handbook-Attachment-H-POAM-Guide.pdf (search for "mitigation")
With services you have no control over, like Elastic IPs, you request one and you get it: that mechanism will always be ISO 27001 compliant. With services where there is a "shared responsibility", the operator can screw it up: take a service (say, a managed node group) and introduce a security flaw, perhaps by shipping the workers on an old Ubuntu build with no security controls in place. It happens, and it's user error on the part of the operator. They simply need to follow the instructions and use the Amazon Linux nodes, which AWS configures to be secure, and they're back in compliance. A sketch of what that looks like in practice follows.
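As a rough illustration (not a drop-in script), here's how a managed node group on the AWS-maintained Amazon Linux image might be created with boto3. The cluster name, node role ARN, and subnet IDs are placeholders I've made up for the example.

```python
# Minimal sketch: create an EKS managed node group on the AWS-maintained
# Amazon Linux 2 AMI rather than a hand-rolled worker image.
# Cluster name, role ARN, and subnet IDs are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_nodegroup(
    clusterName="example-cluster",                                  # placeholder
    nodegroupName="workers",                                        # placeholder
    nodeRole="arn:aws:iam::123456789012:role/example-node-role",    # placeholder
    subnets=["subnet-aaaa", "subnet-bbbb"],                         # placeholders
    instanceTypes=["m5.large"],
    amiType="AL2_x86_64",  # AWS-managed Amazon Linux 2 image, patched by AWS
    scalingConfig={"minSize": 2, "maxSize": 4, "desiredSize": 2},
)
```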
If the move from one cloud to another is just to check certification boxes, then the issue is business, not technology. If security is taken seriously (by design), your company can be secure on any cloud.
So, in effect, the issue won't really be addressed - it will only get moved to the new cloud. That stuff aside...
If you're managing high volumes of data, then Kafka is the answer to 9 out of 10 data problems. You can ingest as much data as your nodes can handle; it leans heavily on memory, so pick node types accordingly. After that, Kafka can 1) accept the inbound data, 2) process it with its ETL facilities (Kafka Streams / Kafka Connect), and 3) pass the processed data to Postgres for long-term storage. It can be (roughly) that simple - see the sketch below.
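Here's a rough sketch of that 1-2-3 flow, assuming the confluent-kafka and psycopg2 packages; the topic name, connection strings, table, and the "transform" step are placeholders, not a prescription.

```python
# Sketch: consume inbound events from Kafka, apply a trivial transform,
# and store the result in Postgres for the long term.
import json
import psycopg2
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",       # placeholder
    "group.id": "etl-to-postgres",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["raw-events"])            # 1) inbound data lands here

pg = psycopg2.connect("dbname=analytics user=etl host=postgres")  # placeholder DSN
cur = pg.cursor()

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # 2) transform: stand-in for whatever your ETL step actually is
        row = (event["id"], event["payload"].upper())
        # 3) long-term storage in Postgres
        cur.execute("INSERT INTO events (id, payload) VALUES (%s, %s)", row)
        pg.commit()
finally:
    consumer.close()
    pg.close()
```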
In fact, one of the guys I work with pulls data from (for example) a graph DB into Kafka, transforms it, and stores the resulting dataset in a relational DB. Kafka can pull data from any data source, transform it, and publish it to any subscriber (another data store). In this way it could become the sun in the middle of your data solar system. And it runs brilliantly on Kubernetes: https://strimzi.io
Put in the simplest terms: source (any data type) -> ETL -> destination (any other data type).
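For the source -> ETL -> destination pattern specifically, Kafka Connect is the usual vehicle. As an illustration only: with Strimzi running a Connect cluster, you could register a sink connector via the Connect REST API like this. The URL, topic, credentials, and the JDBC sink plugin are assumptions/placeholders (the plugin has to be installed in the Connect image), and Strimzi also lets you declare connectors as KafkaConnector custom resources instead of calling the API directly.

```python
# Sketch: register a JDBC sink connector with the Kafka Connect REST API,
# landing a transformed topic into Postgres.
import requests

connector = {
    "name": "postgres-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",  # assumes plugin is installed
        "topics": "transformed-events",                                    # placeholder topic
        "connection.url": "jdbc:postgresql://postgres:5432/analytics",     # placeholder
        "connection.user": "etl",
        "connection.password": "change-me",
        "insert.mode": "upsert",
        "pk.mode": "record_key",
        "auto.create": "true",
    },
}

resp = requests.post("http://kafka-connect:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())
```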
I've never been a fan of storing data on Kubernetes. I know they say you can do it, but it will never be as stable as something like RDS or Cloud SQL; it can't be, because it's managed by humans with lax discipline. RDS, for example, doesn't get touched unless someone triggers a highly polished upgrade program that has likely gone through years of development. Nothing you can do on Kubernetes will replace that time investment. Just use the cloud's managed data storage and you'll be able to sleep at night. Otherwise, one false move by an operator and you're restoring from backups, with endless testing, and you'll never really know which needle was lost from the haystack.