Forum

Welcome to the Tweaking4All community forums!
When participating, please keep the Forum Rules in mind!

Topics for particular software or systems: Start your topic link with the name of the application or system.
For example "MacOS X – Your question", or "MS Word – Your Tip or Trick".

Please note that switching to another language while reading a post will not bring you to the same post in Dutch, as there is no translation for that post!




Dan Ran's Question/Comment ...

 Hans
(@hans)
Famed Member Admin
Joined: 9 years ago
Posts: 2282
Topic starter  

Due to the size of the reply, I had to move this comment to the forum ... 😊 

Hans:

Hi Dan Ran!

 

Dan Reply:

Well hello My friend! I apologize for the late reply, but I kind of forgot about this until now, since I am back looking for a solution to my problem still. Your response is honestly the most awesome reply that I have ever had from a developer, and TBH, it has granted me hope and pure happiness to know that you are not only a good developer, but really friendly and put such great effort into your replies. As I stated with my first post, you are kind of my last hope here in solving my issue, and my issue is right up your alley. So again, thank you so much for your inspirational reply, as I have been digging for a recommended solution for ages now.

 

Hans:

That is awesome! Haha … I’m always surprised how folks use my application, since it started out as just a “dd” script frontend for Raspberry Pi purposes haha.

 

Dan reply:

It is awesome! You have single-handedly saved my server from apocalyptic fails many a time! I’m not a rich man by any means, but if you have a donation box I would buy you a beer for sure!

 

Hans:

When using compression (ZIP etc), date and timestamps should remain unaffected. 

 

Dan reply:

What I meant by this is (from memory): when decompressing the ZIP file (sometimes I make file adjustments to the config.txt in the boot directory before imaging to USB), the ISO file doesn’t share the same timestamp as the ZIP file (unless I have been looking at the wrong tag), but instead shows the timestamp of when it was decompressed. I would like the ISO file, after extraction, to show the timestamp of when the ZIP file was created. This helps me be sure I am flashing the right ISO after modification, when juggling multiple backup ISOs.
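One way to carry the archive's timestamp over to the extracted image is `touch -r`, which copies one file's modification time onto another. A minimal sketch, using placeholder files standing in for the real ZIP and ISO:

```shell
set -e
workdir=$(mktemp -d)

# "backup.zip" stands in for the real archive; give it a fixed past mtime
touch -t 202101020304 "$workdir/backup.zip"
# "backup.iso" stands in for the freshly extracted image (mtime = now)
touch "$workdir/backup.iso"

# copy the ZIP's modification time onto the ISO
touch -r "$workdir/backup.zip" "$workdir/backup.iso"

ls -l "$workdir"/backup.*   # both files now show the same date
```

After a real extraction, the same one-liner (`touch -r backup.zip backup.iso`) would make the ISO show the archive's creation date.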

 

Hans:

1) You should be able to restore ISO files with ApplePi-Baker as well … 

 

Dan reply:

I can’t remember exactly why I picked up the habit of restoring with balenaEtcher instead of ApplePi-Baker, but knowing myself, it had to be for some reason I can no longer recall. Maybe I hit an error one too many times and decided to switch to Etcher. I will go back to using ApplePi-Baker for restoration, and see if it jogs my memory of why I stopped using it for restoration in the first place.

 

Hans:

2) Shrinking errors out: Hmmm … 500GB of space for a 32GB USB stick should be more than enough indeed.

What was the error message?

I know macOS 12.x introduced some Access Violation errors, which would not be relevant for your situation.

 

Dan Reply:

So I realize now that I only got these errors when using ApplePi-Baker v2.2.9-beta. I have since reverted to 2.2.3, and I no longer get the errors. If you want me to reinstall the beta and post some error logs, I can.

 

A while ago, using the beta, I tried making a shrunken backup with ApplePi-Baker v2.2.9, which DID lead to an access violation error.

I chose to abort to eliminate risk of data corruption.

 

“ApplePi-Baker

Access violation.

Press OK to ignore and risk data corruption.

Press Abort to kill the program.”

 

Since this isn’t the first time I have had problems shrinking the partition in ApplePi-Baker, I have found a workaround that USUALLY (up until now, with APB v2.2.9) worked for me.

What I would do is boot up GParted in Linux and shrink the Raspberry Pi 4's ext4 filesystem first, then boot up macOS and use ApplePi-Baker without the resize option, and back up from there. Usually this would work and give me a much smaller ISO file (BTW, I have never had any real problems with the creation of ISO files in APB, works great!). However, with 2.2.9 I have now run into the problem of APB still creating an ISO file the same size as my hard drive, essentially failing to read the shrunken ext4 partition like it used to.

 

UPDATE: I have reverted to APB v2.2.3, and now cloning the GParted pre-shrunken drive DOES work again. So it seems APB v2.2.9 might not be able to read partition sizes that were already shrunken by a different program. I have not tested auto-shrinking on v2.2.3 yet, but it clearly doesn’t work with v2.2.9, so you may want to consider this when releasing your next official update for APB.

 

Hans:

Questions:

 

Making an ISO file is relatively easy, however making a backup of a running server may come with some unexpected issues (since files are changing while the server is running).

In its simplest form you use “dd” to make a raw IMG file and just rename it to “.ISO”. Strictly speaking this is not an ISO 9660 file; however, most operating systems will be able to mount it as a “disk”.
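A minimal sketch of that dd-then-rename approach, run against a plain file instead of a real device so it's safe to try (with a real drive, `if=` would point at the device node, e.g. `/dev/sda`):

```shell
set -e
src=$(mktemp)   # stands in for a real device node such as /dev/sda
dd if=/dev/urandom of="$src" bs=1M count=4 status=none   # fake "disk" contents

out=$(mktemp -d)
# Raw byte-for-byte copy of the whole "device"; this is all a raw backup is
dd if="$src" of="$out/server-backup.img" bs=1M status=none

# Rename to .iso: not real ISO 9660, but most OSes will still mount it as a disk
mv "$out/server-backup.img" "$out/server-backup.iso"

cmp "$src" "$out/server-backup.iso" && echo "images match"
```

Because the copy is raw, it includes every byte of the device, used or not, which is exactly why the image ends up the full size of the drive.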

 

Dan reply:

Yes, I did tinker with the DD Command on a live server. However, I found some obstacles. 

1) Since hard drive capacities aren’t bit-for-bit identical, I noticed that I would usually only have space to reflash the image if the destination disk was larger than the original. In other words, I have problems flashing a 128GB dd file onto a 128GB disk if the destination is not the EXACT same disk the dd file was taken from in the first place.

2) Since I am using only 20GB on the root partition of my 128GB drive (the live running server), I would like to somehow shrink the root partition of the live server down to 20GB after running the dd command, so the disk image is not so big. Or, if possible, shrink the root partition on-the-fly during the dd command, so I don’t need 128GB of space on my backup drive just to run dd on a live server.
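If the filesystem and partition have already been shrunk (e.g. with GParted), you only need to copy the disk up to the end of the last partition; dd's `bs`/`count` arguments let you stop there. A sketch on a plain file standing in for the drive; the 3 MiB offset is made up, and on a real disk you would take the end sector from `fdisk -l` or `parted` output:

```shell
set -e
disk=$(mktemp)   # stands in for the whole drive, e.g. /dev/sda
dd if=/dev/zero of="$disk" bs=1M count=8 status=none   # fake 8 MiB "disk"

# Suppose the (already shrunken) last partition ends 3 MiB into the disk;
# on a real drive you would read this end offset from fdisk/parted output.
end_mib=3

out=$(mktemp -d)
# Copy only up to that point instead of imaging the whole device
dd if="$disk" of="$out/small-backup.img" bs=1M count="$end_mib" status=none

ls -l "$out/small-backup.img"   # 3 MiB instead of 8 MiB
```

This is, in spirit, what ApplePi-Baker's shrink option automates: copy only the occupied extent instead of the whole device.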

 

Hans:

This is what ApplePi-Baker does: it reads data raw from the drive (like “dd” does). This way the copy is a full copy of the disk no matter what is stored on the disk. So you could even make an image of some sort of exotic file system (shrink would not work of course) without any issues, since it is a byte-by-byte copy, ignoring anything like format/partitions/etc.

 

RSYNC is file based, so this would not be a raw copy, and ApplePi-Baker would not have a use for it.

 

Dan Reply:

Thank you for the explanation. This makes sense and helps my understanding.

 

Hans:

Just out of curiosity, what are you trying to accomplish?

 

Dan Reply:

I’m super glad you asked, but you might regret asking, as I am going to unload on you a bit. I have posted questions all over the internet, including Stack Exchange and other forums, all with no real replies. I’m in desperate need of direction here. So, since you asked… lol…

 

My main goal here is to accomplish a disaster recovery plan in case my server gets hacked, the hard drive fails somehow, I somehow screw up configuration files (not knowing what I screwed up or misconfigured) and need to revert the server to an earlier time, or some other flaw happens out of my control where I need an exact clone of an earlier version of the server.

 

The problem with most of the backup utilities I have found for the command line on Ubuntu is that they only back up the user’s home folder and data, whereas I need the root directories to be backed up as well, since I’m running a server.

 

I have toyed with the idea of backing up the root directory of the server, and every single directory on the root partition, with rsync. The problem is that I can’t easily restore everything, since restoring a backup to the server with rsync would overwrite absolutely everything, including the rsync program itself. Once key directories are overwritten, the server would just crash and totally stop working, right as disaster happens.

 

In my ideal situation, what I would really like to do is somehow rsync both the root filesystem and the boot partition’s DOS (FAT) filesystem on the Raspberry Pi into an ISO, or a disk image, that I could just flash to another hard disk when disaster happens, and magically boot up a perfectly working server again. Even more ideal would be to accomplish the aforementioned task with incremental ISO backups, or at least multiple timestamped backups via crontab or something.

 

So far, I have gotten zero responses. Essentially, though, I’m pretty much trying to do what ApplePi-Baker does, but on a live Ubuntu server. LOL.

 

Hans:

Realtime backup to a disk/ISO that can be used to restore your server and keep it running?

 

Dan reply:

Exactly!

 

Hans:

Or a realtime copy so you can bring the ISO elsewhere and get a server running from scratch?

 

Dan reply: 

Nope, none of that.

 

Hans:

Or should I ask: is this to keep things running when the USB stick fails? Or to copy your config to another machine?

 

Dan Reply:

Yes, when the USB stick fails.

 

Hans:

Just asking since software RAID-1 may be an option as well, which would make an image to another drive realtime.

 

Dan reply:

This is interesting. I am very unfamiliar with RAID, but I have heard of it. I’m quite the lost puppy, but it sounds like this could be the thing for me? Please tell me more?

 

Hans:

Maybe this is helpful: Pinguy-os ISO Builder?

 

Dan reply:

I will look into this link soon. Been swamped lately.

 

 

Dan:

Again, I just wanted to say thank you so, so much for your time and help here. I’m sure you are a busy guy, and I can’t tell you how much I appreciate your knowledge. Also, thanks for developing such a great app in ApplePi-Baker. It has already saved me a million times when misconfiguring my server and not knowing what I did. I do manual backups almost every week in case something happens. So you are saving me infinite hours of trouble with this app! You da man!

 

Sincerely,

 

Dan Ran


   
 Hans
(@hans)
Famed Member Admin
Joined: 9 years ago
Posts: 2282
Topic starter  

Hi Dan Ran!

Again apologies for having to move your reply to the forum. 
It just got too long for the regular comments.

First off ... thank you for your kind words. Let me see if I can help ... 👍 

P.S. In your next post, please list the macOS version and hardware (M1 vs Intel processor). I'm sure you wrote it down somewhere; I just didn't see it right away.

Dan reply:
It is awesome! You have single-handedly saved my server from apocalyptic fails many a time! I’m not a rich man by any means, but if you have a donation box I would buy you a beer for sure!

Nice!!! Cool to hear that. 👍 

Dan reply:
What I meant by this, is (from my memory) when decompressing the zip file, the ISO file doesn’t share the same time stamps as the zip file.

Well, actually, what happens is that ApplePi-Baker will stream the data from the disk through a compression library.
So the file date of the ISO and ZIP should be the same (of course, I could be wrong).

Shrinking errors out: Hmmm … 500Gb of space for a 32Gb USB stick should be more than enough indeed.
What was the error message?

I know macOS 12.x introduced some Access Violation errors, which would not be relevant for your situation.

These are hard to debug for me when I cannot reproduce the issue 😞 

The biggest differences between 2.2.3 and 2.2.9 beta are related to the timing when APB talks to its helper tool.
But switching back will probably not reveal anything useful, unless you can look in Console (Applications -> Utilities -> Console; set the filter to APPLEPIBAKER, click the Start button, run APB and make it crash, then press the Pause button in Console) and see if there is anything revealing in there. Note though that a lot of messages will appear, and most of them are not relevant.

... workaround that USUALLY (up until now with APBv2.2.9) worked for me. Gparted in linux ...

Kind-a the procedure I used to use as well, and why I created the shrink/expand option.
I'd love to see that work instead of all these (good) steps. I'm sure you'd love that as well 😉 

Yes, I did tinker with the DD Command on a live server. However, I found some obstacles. 128GB disk.

This is another reason why I wanted the shrink/expand option. Most SD cards are not consistent in capacity, even though they are sold as the same size. Another note: it will be very rare that a user has used every single byte on their SD card. Shrink should shrink your backup IMG to around 20GB, and expand it again to about 128GB when restoring.

Goal:
My main goal here is to accomplish a disaster recovery plan in case my server gets hacked, the hard drive fails somehow, I somehow screw up configuration files (not knowing what I screwed up or misconfigured) and need to revert the server to an earlier time, or some other flaw happens out of my control where I need an exact clone of an earlier version of the server.

That can be a challenge indeed. I can see two steps here, which I'd ideally combine:

  1. Realtime backup with, for example, one RAID variant or another, which will help with crashing hard disks/SSDs etc.
  2. Frequent Backup: Daily or weekly (or whatever frequency you like) full backups, which will help when your server gets hacked/screwed up/etc.

The downside of a realtime backup is that your disaster potentially gets backed up as well.
The downside of a daily or weekly backup is that you may lose some of the work in between.

Ideally you'd combine both options OR ... you separate data from OS + Applications.
This way you have an image of your OS with the applications that you use, which makes a frequent backup sufficient. The only changes would be some minor config changes or application/system updates, which can be restored quite easily if needed. Or ... you do a full backup whenever you update the system, applications, or config.
The data is then stored on a different disk or network share, which can be protected by RAID and frequent backups as well.

Note: RAID is not a backup replacement. It just makes your storage more resilient, so it can withstand errors etc.

To illustrate this with my own setup:

  1. I have one NAS with all my data on it. The data is basically the files I work with and the NAS runs RAID6.
  2. This NAS is nightly copied to a 2nd NAS (running RAID 5).
  3. The OS and Applications are on my PC, and frankly when that one crashes it won't be a problem - I'd just need to reinstall the OS and Applications. 

 

So bringing that to your setup;

  • Make a frequent backup of your OS + Applications (when you change something important/big - it could be good to disable RSYNC before making a backup)
  • Store data on your local disk and RSYNC that data to a network share or external USB drive

At least, that is what I'd consider an "easy" option ...

A RAID setup would not be needed in this approach.
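Scheduled via cron, those two bullets might look something like this; the paths, the share mount point, and the `make-os-image.sh` script are all placeholders, not real tools:

```
# m  h  dom mon dow   command
# Nightly: mirror the data directory to the backup share (placeholder paths)
30 2 *   *   *   rsync -a --delete /srv/data/ /mnt/backup-share/data/
# Weekly (Sunday night): full OS + application image via a placeholder script
0  3 *   *   0   /usr/local/sbin/make-os-image.sh
```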

 

Recovery would be easy:

  • Restore the OS + Applications to SD card (or USB stick) 
    Verify application updates and config changes in case you forgot those.
  • Make sure RSYNC is disabled, to avoid old files overwriting the files on your network share or USB stick.
  • Reconnect to the network share or USB disk and copy your data to the SD card (local disk).
    Make sure all works with local data.
  • If all works properly again: re-enable RSYNC.

 

Does this help?


   