Secure Data is an extremely broad subject, covering all aspects of data security: preventing loss, maintaining data integrity, and ensuring that data is correctly backed up and available for immediate restoration.
All organisations today collect and process some form of data. It's imperative that business-critical and personal data be secure from loss and theft. Data loss could occur from:
- Theft by cyber criminals.
- Software data corruption.
- Corruption from ransomware.
- Hardware failure.
- Malware or viruses.
- Theft or loss of hardware.
- Fire, flood or power issues.
- Natural disasters.
- Accidental file deletion.
- Sabotage by a disgruntled employee or other insider.
Prevention is always better than a cure, especially in the case of data theft: once your network has been breached and data stolen, no amount of backups can fix the problem.
Every organisation is different, but there are several steps an organisation can take to prevent data loss.
- Identify which data requires greater levels of protection and put a plan in place to ensure this data is secure.
- Store this data in a secure location on a secure server. It is also advisable to encrypt the data.
- Install servers in a climate-controlled and physically secure environment; this prolongs hardware life and reduces unauthorised access to the servers.
- Connect your servers to a UPS (uninterruptible power supply) to prevent loss of power and protect against power surges.
- Limit user logins to the minimum functionality each person requires to perform their job; don't just give everybody full admin rights.
- Have a robust password policy in place, with strong passwords that are changed frequently.
- Train staff in the importance of data security and how to recognise a potential cyber-attack, for example what a phishing email looks like.
- Use up-to-date anti-virus and email security.
- Make sure your computer systems are protected by a secure firewall.
- Make sure all computers in use are up to date, with the latest patches loaded.
- Adhere strictly to set procedures: for example, do not act on suspicious emailed instructions without verifying them first, even if they appear to come from the CEO.
- Keep your "attack surface" as small as possible. If staff carry portable devices such as laptops and smartphones out of the office, it is advisable to have them password protected, encrypted, and able to be remotely wiped in the event of loss or theft.
No amount of preventative measures can stop hardware eventually failing, particularly hard drives: all hard drives will fail in time, and data will be lost.
Expect it – Plan for it:
The only way to ensure you do not lose too much data is to ensure you have an effective backup plan in place.
What makes for a good backup strategy?
Decide how often backups are necessary, depending on the nature of your data and how much data your organisation can "afford" to lose: your RPO (Recovery Point Objective). If your organisation only backs up once a week, you could lose an entire week's worth of data in the event of a disaster; for many organisations this would be unacceptable, and more frequent backups may be required.
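As a rough sketch (the function names here are illustrative, not from any particular backup tool), the RPO translates directly into a maximum allowable interval between backups:

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is roughly the time since the last good
    backup, so the backup interval must not exceed the RPO."""
    return backup_interval <= rpo

# Weekly backups against a one-day RPO fail the check;
# twice-daily backups pass it.
print(meets_rpo(timedelta(weeks=1), timedelta(days=1)))   # False
print(meets_rpo(timedelta(hours=12), timedelta(days=1)))  # True
```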
Having a single backup stored on-site may be convenient, but it’s not enough. In the event of a fire or office theft you may lose your original data AND your backup.
A good backup strategy follows the 3-2-1 principle: you need 3 copies of your data (the original and 2 backups) on 2 different types of media, with 1 copy stored onsite and 1 copy stored offsite, which could be in the cloud.
The reason for 3 copies is redundancy.
Keeping the 2 backups on different media, e.g. a hard drive and a DAT tape, ensures different failure times, as the two technologies have different lifespans. This enables you to replace the faulty media and restore the backup from the surviving copy.
Keeping 1 backup onsite allows quick and convenient restoration in the event of data loss. Keeping 1 backup offsite, in a different geographical location, is an insurance policy against a catastrophic onsite event, such as fire or flood, that could destroy all local data storage devices. It's a good idea to back up to a cloud server in a different geographical area, maybe even a different continent, to cover threats that span a large region: severe weather, blackouts, etc.
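The 3-2-1 rule lends itself to a simple automated check. This is an illustrative sketch; the `media` and `location` fields are assumptions made for the example, not part of any standard:

```python
def satisfies_3_2_1(copies: list[dict]) -> bool:
    """3 copies in total, on at least 2 different media types,
    with at least 1 copy held offsite."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    has_offsite = any(c["location"] == "offsite" for c in copies)
    return enough_copies and enough_media and has_offsite

copies = [
    {"media": "server disk", "location": "onsite"},   # the original
    {"media": "tape",        "location": "onsite"},   # local backup
    {"media": "cloud",       "location": "offsite"},  # offsite backup
]
print(satisfies_3_2_1(copies))  # True
```

Three copies on the same onsite disk would fail all but the first condition, which is exactly the weakness the principle is designed to catch.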
Simple Retention Policy.
A simple retention policy is intended for short-term archiving, where you decide how many restore points you wish to retain and a copy interval; for example, 7 restore points with a copy interval of 1 day. A simple retention policy performs one full backup once a week and an incremental backup on each of the following 6 days. The media is then overwritten.
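The weekly full/incremental cycle can be sketched as a schedule (illustrative only; real backup software labels and schedules jobs its own way):

```python
def simple_retention_schedule(days: int) -> list[str]:
    """Day 0 of each 7-day cycle is a full backup; the following 6 days
    are incrementals. After the 7 retained restore points, the media is
    overwritten and the cycle repeats."""
    return ["full" if day % 7 == 0 else "incremental" for day in range(days)]

print(simple_retention_schedule(8))
# ['full', 'incremental', 'incremental', 'incremental',
#  'incremental', 'incremental', 'incremental', 'full']
```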
FIFO- First In First Out.
Like the Simple Retention Policy, but instead of incremental backups a full backup is made each time, overwriting the oldest existing backup. Backups are held for a period corresponding to the number of available media: if you have 7 media, backups are held for 7 days; 14 media, 14 days.
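The rotation itself is just modular arithmetic, which a short sketch makes concrete (the slot-numbering convention here is an assumption for illustration):

```python
def fifo_slot(day: int, media_count: int) -> int:
    """Each day's full backup goes onto slot day % media_count, which is
    always the slot holding the oldest existing backup."""
    return day % media_count

# With 7 media, day 7's backup overwrites the media written on day 0,
# so any given backup survives for exactly 7 days.
print(fifo_slot(7, media_count=7))    # 0
print(fifo_slot(20, media_count=14))  # 6
```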
A weakness of the two examples above is the potential for lost data. A file could be corrupted or accidentally deleted, the error not picked up during the retention period, and all the older backups overwritten; that file is now lost.
GFS Retention Policy.
This is a more thorough strategy than the Simple and FIFO methods: data is backed up over a longer period, reducing the chance of lost data.
Grandfather-father-son backup is a common rotation scheme for backup media, in which there are three or more backup cycles, such as daily, weekly and monthly.
The daily backups are rotated daily using a FIFO system as above. The weekly backups are similarly rotated on a weekly basis, and the monthly backup on a monthly basis.
In addition, quarterly, half-yearly, and/or annual backups could also be separately retained, it’s a good idea to keep these offsite.
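One way to picture the GFS cycles is to classify each calendar date into a tier. The trigger days below (monthlies on the 1st, weeklies on Sundays) are an assumption chosen for illustration; real schemes pick their own rotation days:

```python
import datetime

def gfs_tier(date: datetime.date) -> str:
    """Assumed scheme: 'grandfather' (monthly) on the 1st of the month,
    'father' (weekly) on Sundays, 'son' (daily) otherwise."""
    if date.day == 1:
        return "grandfather (monthly)"
    if date.weekday() == 6:  # Sunday
        return "father (weekly)"
    return "son (daily)"

print(gfs_tier(datetime.date(2024, 7, 1)))  # grandfather (monthly)
print(gfs_tier(datetime.date(2024, 7, 7)))  # father (weekly)
print(gfs_tier(datetime.date(2024, 7, 2)))  # son (daily)
```

Each tier then rotates on its own FIFO cycle, so a daily slot is overwritten within a week while a monthly slot survives for months.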
If it hasn’t been tested and confirmed, it doesn’t exist.
Irrespective of which backup retention policy you choose, it's vital that backups be tested from time to time to confirm that they are working and can be restored.
Having a good backup policy in place is the first step to securing your data, having a realistic recovery plan is the next step.
A disaster recovery plan is a balance between the organisation's urgency to get back up and running and the cost of the desired redundancy levels.
In an ideal world we would have 100% redundancy, with a second server mirroring the operational server that would immediately take over if the operational server failed, restricting data loss to just a few seconds.
While this level of redundancy is technically possible, it's not within the budget or requirements of all organisations.
For some organisations a recovery period of a few days may be acceptable, as any missing or lost data can simply be re-captured manually on the new system once restored; for other organisations a few days' data loss could be catastrophic.
It all depends on how critical and time dependent your data is.