ACFS and OeBS 12.2

I know it’s an unsupported configuration (and what’s a supported configuration then?), but if you use ACFS as a shared filesystem for OeBS 12.2 you might run into a strange problem with the new adop functionality.

I ran adop phase=fs_clone and got the following error on my system:

ERROR while running FSCloneApply...
ERRORMSG: /opt/acfs/R12.2/fs1/EBSapps/comn/adopclone_appltest01e/bin/adclone.pl did not go through successfully.

[APPLY PHASE]
AutoConfig could not successfully execute the following scripts:
ouicli.pl INSTE8_APPLY 1

ERROR: RC-50013: Fatal: Instantiate driver did not complete successfully.
/opt/acfs/R12.2/fs2/EBSapps/10.1.2/appsutil/driver/regclone.drv
ERROR: RC-50004: Fatal: Error occurred in ApplyAppsTechStack:
RC-50013: Fatal: Failed to instantiate driver /opt/acfs/R12.2/fs2/EBSapps/10.1.2/appsutil/driver/regclone.drv

The actual error message can be found in ohclone.log:

Executing command sh -c "/opt/acfs/R12.2/fs2/EBSapps/10.1.2/oui/bin/runInstaller -printdiskusage -ignoreDiskWarning -debug -clone -silent -force -nolink -waitForCompletion -invPtrLoc /etc/oraInst.loc session:ORACLE_HOME=/opt/acfs/R12.2/fs2/EBSapps/10.1.2
...
Error returned: Value too large for defined data type
There is not enough space on the volume you have specified. Oracle Universal Installer has detected that you currently have 0 MB available on the chosen volume. 750 MB of space is required for the software.

That helps, but not a lot. There is a document on MOS: rapidwiz File System Upgrade Fails With “There is not enough space on the volume you have specified.” Even Though There Is Sufficient Space (Doc ID 1942808.1).
It recommends decreasing the size of an NFS-mounted disk from 36 TB to 2 TB. The problem is that I have a 7 TB ext4 volume where everything works fine, so this looks like an ACFS-specific issue.

After a couple of days of investigation I noticed some strange lines in the strace output:

23174 statfs("/opt/acfs/R12.2/fs2/EBSapps/10.1.2", 0x5621f54c) = -1 EOVERFLOW (Value too large for defined data type)
23174 write(2, "Error returned: Value too large for defined data type\n", 54) = 54

So the error message is printed right after the statfs system call fails with EOVERFLOW. I reproduced the issue with a simple C program:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/vfs.h>

int main(int argc, char **argv) {
    /* path to check: first command-line argument, or a hard-coded default */
    char fn[] = "/opt/acfs/R12.2/fs2/EBSapps/10.1.2";
    char *init_d = (argc > 1) ? argv[1] : fn;
    struct statfs info;
    int rc;

    printf("Filesystem to check: %s\n", init_d);
    rc = statfs(init_d, &info);
    int errsv = errno;
    printf("                          Result: %d error string: %s\n", rc, strerror(errsv));
    printf("              Type of filesystem: %08x\n", (int) info.f_type);
    printf("     Optimal transfer block size: %lld\n", (long long) info.f_bsize);
    printf(" Total data blocks in filesystem: %lld\n", (long long) info.f_blocks);
    printf("       Free blocks in filesystem: %lld\n", (long long) info.f_bfree);
    printf("           Free blocks available: %lld\n", (long long) info.f_bavail);
    printf("  Total file nodes in filesystem: %d\n", (int) info.f_files);
    printf("   Free file nodes in filesystem: %d\n", (int) info.f_ffree);
    printf("     Maximum length of filenames: %d\n", (int) info.f_namelen);
    printf("                   Fragment size: %d\n", (int) info.f_frsize);
    printf("       Mount flags of filesystem: %d\n", (int) info.f_flags);
    return 0;
}

This code must be compiled as a 32-bit application (when compiled in 64-bit mode it works fine).
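
For reference, this is roughly how both variants can be built (the source file name test_statfs.c is just my choice, and the 32-bit build needs the 32-bit glibc development packages installed):

gcc -m32 -o test_statfs test_statfs.c        # 32-bit build: fails with EOVERFLOW on the large ACFS volume
gcc -o test_statfs_64bit test_statfs.c       # default 64-bit build: works fine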

The error output:

[oracle@appltest01e.oebs.yandex.net ~]$ ./test_statfs /opt/acfs
Filesystem to check: /opt/acfs
Result: -1 error string: Value too large for defined data type
Type of filesystem: 00000000
Optimal transfer block size: 1224734968
Total data blocks in filesystem: 1224734968
Free blocks in filesystem: 1
Free blocks available: 13238272
Total file nodes in filesystem: 0
Free file nodes in filesystem: 15774463
Maximum length of filenames: 20468
Fragment size: 1226387928
Mount flags of filesystem: 0

The real problem here is the number of inodes. I compared ext4 and ACFS: the number of inodes on our ext4 filesystems is usually small enough to fit into a 32-bit unsigned long:

$ df -i
Filesystem            Inodes     IUsed      IFree IUse% Mounted on
/dev/md1           242872320   5583659  237288661    3% /opt

whereas for acfs:

$ df -i
Filesystem            Inodes      IUsed      IFree IUse% Mounted on
/dev/asm/acfs-116 4294967296  836130432 3458836864   20% /opt/acfs

The difference is explained very lucidly in Doc ID 2026700.1, Does ACFS Use inode architecture?:

ACFS filesystems do not use the inode architecture.
ACFS filesystems do not have a pre-allocated inode table.
Therefore, the “df -i” command returns the number of inodes that are theoretically possible given the space remaining.
On ACFS filesystems, the inode table grows dynamically; any free storage is eligible for inode creation.

The number of inodes in my case (4294967296, i.e. 2^32) cannot be stored in a 32-bit unsigned long, and it is about twice the disk volume size.
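
To double-check that the limitation is in the legacy 32-bit statfs structure rather than in ACFS itself, the same test can be built as a 32-bit binary with 64-bit file interfaces enabled, in which case glibc uses statfs64 with 64-bit counters. This is only a sketch under that assumption; it obviously does not help the pre-built 32-bit runInstaller, but it shows where the EOVERFLOW comes from:

/* Build 32-bit, but with 64-bit filesystem counters:
   gcc -m32 -D_FILE_OFFSET_BITS=64 -o test_statfs64 test_statfs64.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/vfs.h>

int main(int argc, char **argv) {
    /* with _FILE_OFFSET_BITS=64 the f_files/f_blocks fields are 64-bit wide */
    struct statfs info;
    const char *path = (argc > 1) ? argv[1] : "/opt/acfs";

    if (statfs(path, &info) != 0) {
        printf("statfs(%s) failed: %s\n", path, strerror(errno));
        return 1;
    }
    /* f_files is the field where the 2^32 ACFS inode count overflows the legacy 32-bit struct */
    printf(" Total file nodes in filesystem: %llu\n", (unsigned long long) info.f_files);
    printf("Total data blocks in filesystem: %llu\n", (unsigned long long) info.f_blocks);
    return 0;
}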

So we simply shrank the ACFS volume by 100 GB:

acfsutil size -100G /opt/acfs
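
After the resize it is worth verifying that the reported inode count has dropped back below 2^32 and that the 32-bit statfs succeeds again (the exact figures will of course differ from system to system):

df -i /opt/acfs
./test_statfs /opt/acfs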

Now everything works fine. The only unpleasant consequence is an effective 2 TB limit on the volume size: with a larger volume the theoretical inode count reported by ACFS reaches 2^32 again, which the 32-bit statfs cannot represent. I hope the OeBS team changes its mind about ACFS support someday, so that we can store all the data in a single volume.
