Repair Data "Anomalies have been detected"

If I do a standard Repair Data, and at the end 4D says “Anomalies have been detected” (which I was expecting, given the particular damage), does this mean I need to run a second repair, or am I good to continue with this data file?

(I had a few records that were unreadable.)

:idea: after a repair I always run a new verification, until no new errors arise

When I get such a report (“Anomalies have been detected”), I don’t try a repair first BUT I do a Restore from my last backup.

And before each backup I launch a verification with the 4D command: VERIFY CURRENT DATA FILE

I use the data repair function only as a very last resort, because this type of command can delete damaged records.
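As a sketch of that pre-backup verification: the method name `VerifyBeforeBackup`, the callback name `VerifyCallback`, and the interprocess flag are my own assumptions, and the option constants should be checked against the VERIFY CURRENT DATA FILE documentation for your 4D version:

```4d
  // Method: VerifyBeforeBackup (hypothetical name)
  // Verifies the open data file, then backs up only if nothing was reported.
<>vbDataDamaged:=False
VERIFY CURRENT DATA FILE("VerifyCallback";Verify records+Verify indexes)
If (Not(<>vbDataDamaged))
	BACKUP  // data looked clean, safe to back up
Else
	  // warn the administrator instead of backing up a damaged file
End if
```

The callback method receives the verification’s progress and anomaly messages; a minimal version could simply set `<>vbDataDamaged:=True` whenever an error-type message comes in (see the command’s documentation for the exact parameters it is passed).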

Maurice, you do a restore of your last backup and then integrate the log, right?
Otherwise there is no benefit: you either lose the work done after the last backup, or you lose some damaged records.

: Maurice INZIRILLO

And before each backup I launch a verification with the 4D command: VERIFY CURRENT DATA FILE

Does that mean that 4D doesn’t do this automatically? :-?
I always thought that 4D verified the file before or during the backup.
What is the point of backing up corrupted files? :-o

: Manuel PIQUET

I always thought that 4D verified the file before or during the backup.
That’s our job. Unless you prefer slower backups…

Can’t 4D launch a process, even after the backup, to verify the backed-up files?

: Arnaud DE MONTARD
: Manuel PIQUET

I always thought that 4D verified the file before or during the backup.
That’s our job. Unless you prefer slower backups…

And what do you do afterwards with your corrupted backup??? :-?
Speed is good, but not if the result is faulty :roll:

This is why doing a verify Data first is a good idea.

Still our job: VERIFY CURRENT DATA FILE (https://doc.4d.com/4Dv17R4/4D/17-R4/VERIFIER-FICHIER-DONNEES-OUVERT.301-4054749.fr.html)
Verify takes time and users have to wait.
Most of the time, I back up twice a day (13:00 and 19:00, after the rush) but verify once a day.
I prefer to divide and conquer.
BTW, I still don’t like that the backup is a single file; it takes too long to extract from.

: Maurice INZIRILLO

This is why doing a verify Data first is a good idea.
I like that. The database method runs after the backup, so how do you trigger the verify first? …

: Manuel PIQUET

And what do you do afterwards with your corrupted backup??? :-?
the backup is corrupted because the source was:
what do you do with your corrupted 4DD??? :-?

:lol:

If the backup is a simple copy of a file, then I would prefer to stop the application and copy the file myself with the system; that would perform better. :frowning:

To know that you have a corrupted 4DD, you have to run the check first. But during this check, all access is blocked for every user. What I want is: when I make a backup, verify that the file I have backed up is not corrupted. From that, we can then deduce whether the current file is possibly corrupted too!

While you are checking the backup, the database is not stopped! :idea:

: Arnaud DE MONTARD
: Maurice INZIRILLO

This is why doing a verify Data first is a good idea.
I like that. The database method runs after the backup, so how do you trigger the verify first? …

The On Backup Startup database method is a good candidate! :wink:
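A minimal sketch of that idea, assuming a flag like `<>vbDataDamaged` set earlier by a verification callback (the flag name is hypothetical). On Backup Startup returns an error code in $0, and 0 lets the backup proceed:

```4d
  // On Backup Startup database method
C_LONGINT($0)
  // run your verification here (e.g. VERIFY CURRENT DATA FILE with a callback)
If (<>vbDataDamaged)  // hypothetical flag set by the verify callback
	$0:=1  // any non-zero value: the backup is not performed
Else
	$0:=0  // 0: 4D is allowed to start the backup
End if
```

This way a scheduled backup simply refuses to run against a data file that just failed verification, instead of archiving the damage.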

Or you can create a stored process that launches your database verification from time to time, with an alert to warn the admin if the data is damaged. It is then up to the admin to decide whether to stop the current database, restore the last backup, and integrate the current log file.
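A sketch of such a periodic process; the method name, callback, and flag are assumptions, and note that DELAY PROCESS counts in ticks (60 per second):

```4d
  // Method: PeriodicVerify (hypothetical), launched once at startup, e.g.:
  //   $vlPID:=New process("PeriodicVerify";512*1024;"PeriodicVerify")
While (True)
	DELAY PROCESS(Current process;86400*60)  // sleep ~24 hours, in ticks
	<>vbDataDamaged:=False
	VERIFY CURRENT DATA FILE("VerifyCallback";Verify records+Verify indexes)
	If (<>vbDataDamaged)
		  // alert the admin: write a log entry, send an email, etc.
	End if
End while
```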

: Maurice INZIRILLO

The On Backup Startup database method is a good candidate! :wink:
you won’t believe it, I had never noticed that one.
where is the pillory, please?

: Manuel PIQUET

If the backup is a simple copy of a file…
You can’t say that; think about transactions. Or try and pray.

Just to be clear: you verify the current data file, OK, but this way you can’t be sure the backup file is correct too!

Maurice,
Normally I would go to the last GOOD backup: restore it, and integrate journals forward.
However, I’m in a position with a large data file (130 GB) where we temporarily ran without a journal and had to start a new journal after the last backup, so the above mechanism is not an option.

So now, the client is limping along on a damaged data file until my off-line repair completes; then I’ll integrate the journals written since then.

: Manuel PIQUET

If the backup is a simple copy of a file, then I would prefer to stop the application and copy the file myself with the system; that would perform better. :frowning:

To know that you have a corrupted 4DD, you have to run the check first. But during this check, all access is blocked for every user. What I want is: when I make a backup, verify that the file I have backed up is not corrupted. From that, we can then deduce whether the current file is possibly corrupted too!

While you are checking the backup, the database is not stopped!
:idea:

The database is blocked for writing during the backup, but not for reading. The same holds during the verification.

You can start the verification at times of reduced activity.

: Manuel PIQUET

Just to be clear: you verify the current data file, OK, but this way you can’t be sure the backup file is correct too!

Backup provides many options that help guarantee the backup file is as safe as possible (interlacing rate, redundancy rate). But if your storage system itself is broken (bad hard-disk sectors, a damaged catalog, a failing SSD, etc.), that’s another story, one that needs to be addressed by IT.

If you want to check whether the backup is OK, you can launch a restore of your last backup using another instance of 4D, for example, and when the restore is done, you can verify that data :slight_smile:

Or you can also set up a 4D Server as a mirror of your main 4D Server (production) if you want a 4D Server that is truly available 24/7. On the mirror you can launch a data verification, etc.
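The mirror approach relies on the journal: the production server regularly closes its current log file, the segment is transferred, and the mirror integrates it. A rough sketch, with the variable names and the transfer mechanism left to you (check New log file and INTEGRATE LOG FILE in the docs for your version):

```4d
  // On the production server: close the current journal and start a new one.
C_TEXT($vtClosedLog)
$vtClosedLog:=New log file  // returns the path of the just-closed log segment
  // …transfer $vtClosedLog to the mirror machine by any means you like…

  // On the mirror server: integrate the received segment.
INTEGRATE LOG FILE($vtMirrorCopyPath)  // $vtMirrorCopyPath: local copy of the segment
```

Since the mirror holds a full, up-to-date copy of the data, verification (and even the backups themselves) can run there without blocking production users.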

Without getting into physical hardware problems, is it not possible to simply launch a process after the backup to verify the backed-up file? (testing the data file itself, but also its coherence with the structure)

What you describe is doable, but quite disproportionate just to make sure that the backup was a good one…

It could be a new option, but it must be accessible simply, not only by programming. An administrator (NOT a developer) must also be able to use it.