We can add an extra layer of security via .htaccess…
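As a purely illustrative sketch (not taken from the original article), a basic-auth .htaccess for Apache could look like this, assuming a password file has already been created with htpasswd at the hypothetical path /etc/apache2/.htpasswd:
AuthType Basic
AuthName "Restricted area"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user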
Note that bc must also be installed:
$ sudo apt-get install bc
Otherwise you get this error:
2018/09/21-19:10:12 [5846] Error output from freeboxv5_uptime:
2018/09/21-19:10:12 [5846] /etc/munin/plugins/freeboxv5_uptime: line 143: bc: command not found
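The plugin presumably needs bc for floating-point arithmetic that the shell cannot do on its own; once installed, a quick sanity check (my own, purely illustrative) is:
$ echo "scale=2; 123456/86400" | bc
1.42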
Let's start Elasticsearch:
# sudo service elasticsearch start
# sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
Active: failed (Result: exit-code) since mer. 2018-09-19 18:07:39 UTC; 2s ago
Docs: http://www.elastic.co
Process: 5873 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
Process: 5869 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 5873 (code=exited, status=1/FAILURE)
sept. 19 18:07:39 osmc elasticsearch[5873]: Error occurred during initialization of VM
sept. 19 18:07:39 osmc elasticsearch[5873]: Could not reserve enough space for 2097152KB object heap
sept. 19 18:07:39 osmc systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
sept. 19 18:07:39 osmc systemd[1]: Unit elasticsearch.service entered failed state.
Misery… Java is really starting to annoy me… Let's edit the /etc/elasticsearch/jvm.options file:
# cat /etc/elasticsearch/jvm.options | grep Xm
## -Xms4g
## -Xmx4g
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
#-Xms2g
-Xms200m
#-Xmx2g
-Xmx500m
New test:
# sudo service elasticsearch start
# sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
Active: active (running) since mer. 2018-09-19 18:11:26 UTC; 3s ago
Docs: http://www.elastic.co
Process: 5940 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 5944 (java)
CGroup: /system.slice/elasticsearch.service
└─5944 /usr/bin/java -Xms200m -Xmx500m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava....
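To double-check that the node really answers (an optional verification, not part of the original steps), query the REST API on the default port; the response should contain the cluster name and the Elasticsearch version:
# curl http://localhost:9200/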
Step 4: Installing Logstash:
# sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.deb
# sudo dpkg -i logstash-5.5.2.deb
Selecting previously unselected package logstash.
(Reading database ... 26506 files and directories currently installed.)
Preparing to unpack logstash-5.5.2.deb ...
Unpacking logstash (1:5.5.2-1) ...
Setting up logstash (1:5.5.2-1) ...
Using provided startup.options file: /etc/logstash/startup.options
Java HotSpot(TM) Client VM warning: TieredCompilation is disabled in this release.
io/console on JRuby shells out to stty for most operations
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/pleaserun-0.0.30/lib/pleaserun/installer.rb:46 warning: executable? does not in this environment and will return a dummy value
Successfully created system startup script for Logstash
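Logstash will not do anything until it has a pipeline configured. The pipeline actually used here is not shown, but as a minimal sketch (hypothetical file name and port), a syslog-style configuration in /etc/logstash/conf.d/10-syslog.conf could look like:
input {
  udp {
    port => 5514
    type => "syslog"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}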
Step 5: Installing JFFI:
# sudo apt-get install ant
# sudo apt-get install git
# sudo git clone https://github.com/jnr/jffi.git
# cd jffi
# sudo ant jar
Buildfile: /root/jffi/build.xml
-pre-init:
-init-vars:
[mkdir] Created dir: /root/jffi/build/jni
-post-init:
-init:
-pre-jar:
-pre-compile:
-do-compile:
[mkdir] Created dir: /root/jffi/build/classes
[javac] Compiling 42 source files to /root/jffi/build/classes
[javac] warning: [options] bootstrap class path not set in conjunction with -source 1.6
[javac] /root/jffi/src/main/java/com/kenai/jffi/MemoryIO.java:847: warning: Unsafe is internal proprietary API and may be removed in a future release
[javac] protected static sun.misc.Unsafe unsafe = sun.misc.Unsafe.class.cast(getUnsafe());
[javac] ^
[javac] /root/jffi/src/main/java/com/kenai/jffi/MemoryIO.java:847: warning: Unsafe is internal proprietary API and may be removed in a future release
[javac] protected static sun.misc.Unsafe unsafe = sun.misc.Unsafe.class.cast(getUnsafe());
[javac] ^
[javac] Note: /root/jffi/src/main/java/com/kenai/jffi/ClosureMagazine.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 3 warnings
-generate-version-source:
[echo] Generating Version.java
[mkdir] Created dir: /root/jffi/build/java/com/kenai/jffi
-generate-version:
[javac] Compiling 1 source file to /root/jffi/build/classes
[javac] warning: [options] bootstrap class path not set in conjunction with -source 1.6
[javac] 1 warning
-compile-java:
-generate-native-headers:
-build-native-library:
BUILD FAILED
/root/jffi/build.xml:344: Execute failed: java.io.IOException: Cannot run program "make": error=2, Aucun fichier ou dossier de ce type
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
at org.apache.tools.ant.taskdefs.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:428)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:442)
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:628)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:669)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:495)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1393)
at org.apache.tools.ant.Project.executeTarget(Project.java:1364)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1248)
at org.apache.tools.ant.Main.runBuild(Main.java:851)
at org.apache.tools.ant.Main.startAnt(Main.java:235)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109)
Caused by: java.io.IOException: error=2, Aucun fichier ou dossier de ce type
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 23 more
Total time: 11 seconds
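The failure itself is mundane: ant cannot find make, so the native part of JFFI cannot be built. Installing the build toolchain and re-running the build would presumably get past it:
# sudo apt-get install build-essential
# sudo ant jar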
Rather than fight with the native build, let's try plan B:
# sudo apt-get install zip
# cd /usr/share/logstash/vendor/jruby/lib
# sudo zip -g jruby.jar jni/arm-Linux/libjffi-1.2.so
updating: jni/arm-Linux/libjffi-1.2.so
zip warning: Local Entry CRC does not match CD: jni/arm-Linux/libjffi-1.2.so
(deflated 63%)
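To make sure the native library really ended up inside the archive (a quick check of my own), list the jar contents:
# unzip -l jruby.jar | grep libjffi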
Fingers crossed… let's start it:
# sudo service logstash start
# sudo service logstash status
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled)
Active: active (running) since mer. 2018-09-19 18:33:29 UTC; 9s ago
Main PID: 6431 (java)
CGroup: /system.slice/logstash.service
└─6431 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=...
Now for Kibana:
# sudo service kibana start
# sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled)
Active: active (running) since mer. 2018-09-19 18:50:09 UTC; 2s ago
Main PID: 7396 (node)
CGroup: /system.slice/kibana.service
└─7396 /opt/kibana/kibana-5.5.2-linux-x86/bin/../node/bin/node --no-warnings /opt/kibana/kibana-5.5.2-linux-x86/bin/../src/cli
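Kibana listens on port 5601 by default; an optional way to confirm it answers locally is:
# curl -I http://localhost:5601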
Step 6: Installing Nginx:
# sudo apt-get install nginx apache2-utils
# sudo htpasswd -c /etc/nginx/htpasswd.users kibana_admin
New password:
Re-type new password:
Adding password for user kibana_admin
Edit /etc/nginx/sites-available/default:
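The exact server block used here is not reproduced; as a sketch, the usual Kibana reverse proxy with basic auth (assuming Kibana on 127.0.0.1:5601 and the htpasswd.users file created above) looks like this:
server {
    listen 80;
    server_name _;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}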
Step 7: Starting all the services:
root@osmc:~# sudo service logstash restart && sudo service elasticsearch restart && sudo service kibana restart && sudo service nginx start
root@osmc:~# sudo service logstash status
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled)
Active: active (running) since mer. 2018-09-19 18:56:55 UTC; 1min 25s ago
Main PID: 7933 (java)
CGroup: /system.slice/logstash.service
└─7933 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=...
root@osmc:~#
root@osmc:~# sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
Active: failed (Result: signal) since mer. 2018-09-19 18:58:30 UTC; 49s ago
Docs: http://www.elastic.co
Process: 7960 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=killed, signal=KILL)
Process: 7956 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 7960 (code=killed, signal=KILL)
sept. 19 18:58:30 osmc systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
sept. 19 18:58:30 osmc systemd[1]: Unit elasticsearch.service entered failed state.
root@osmc:~# sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled)
Active: active (running) since mer. 2018-09-19 18:56:55 UTC; 2min 40s ago
Main PID: 7985 (node)
CGroup: /system.slice/kibana.service
└─7985 /opt/kibana/kibana-5.5.2-linux-x86/bin/../node/bin/node --no-warnings /opt/kibana/kibana-5.5.2-linux-x86/bin/../src/cli
root@osmc:~# sudo service nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled)
Active: active (running) since mer. 2018-09-19 18:54:47 UTC; 4min 59s ago
Main PID: 7783 (nginx)
CGroup: /system.slice/nginx.service
├─7783 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─7784 nginx: worker process
├─7785 nginx: worker process
├─7786 nginx: worker process
└─7787 nginx: worker process
One of the four is not running… Misery.
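Elasticsearch was killed with SIGKILL, which on a small ARM board usually points to the kernel OOM killer rather than Elasticsearch itself. One way to confirm that hypothesis is to look for the kernel's trace:
# dmesg | grep -i -E "killed process|out of memory"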
root@osmc:~# sudo service elasticsearch start
root@osmc:~# sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled)
Active: active (running) since mer. 2018-09-19 19:00:04 UTC; 43s ago
Docs: http://www.elastic.co
Process: 8208 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 8213 (java)
CGroup: /system.slice/elasticsearch.service
└─8213 /usr/bin/java -Xms200m -Xmx500m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava...
Long live Java… and then it crashes again, without leaving many logs:
root@osmc:~# tail -f /var/log/elasticsearch/elasticsearch.log
[2018-09-19T19:00:44,349][INFO ][o.e.n.Node ] initialized
[2018-09-19T19:00:44,350][INFO ][o.e.n.Node ] [feSXsTX] starting ...
[2018-09-19T19:00:45,591][INFO ][o.e.t.TransportService ] [feSXsTX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2018-09-19T19:00:45,699][WARN ][o.e.b.BootstrapChecks ] [feSXsTX] initial heap size [209715200] not equal to maximum heap size [524288000]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2018-09-19T19:00:45,700][WARN ][o.e.b.BootstrapChecks ] [feSXsTX] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2018-09-19T19:00:48,977][INFO ][o.e.c.s.ClusterService ] [feSXsTX] new_master {feSXsTX}{feSXsTXeQw-AEPi_pWmySw}{FlzLJ3stTwO--_vZD3nxLw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-09-19T19:00:49,201][INFO ][o.e.h.n.Netty4HttpServerTransport] [feSXsTX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2018-09-19T19:00:49,202][INFO ][o.e.n.Node ] [feSXsTX] started
[2018-09-19T19:00:50,662][INFO ][o.e.g.GatewayService ] [feSXsTX] recovered [1] indices into cluster_state
[2018-09-19T19:00:54,270][INFO ][o.e.c.r.a.AllocationService] [feSXsTX] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
So I made a change to the memory settings:
[2018-09-19T19:08:50,943][INFO ][o.e.n.Node ] JVM arguments [-Xms100m, -Xmx300m, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2018-09-19T19:08:59,377][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [aggs-matrix-stats]
[2018-09-19T19:08:59,378][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [ingest-common]
[2018-09-19T19:08:59,379][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [lang-expression]
[2018-09-19T19:08:59,380][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [lang-groovy]
[2018-09-19T19:08:59,381][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [lang-mustache]
[2018-09-19T19:08:59,382][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [lang-painless]
[2018-09-19T19:08:59,383][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [parent-join]
[2018-09-19T19:08:59,384][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [percolator]
[2018-09-19T19:08:59,384][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [reindex]
[2018-09-19T19:08:59,385][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [transport-netty3]
[2018-09-19T19:08:59,386][INFO ][o.e.p.PluginsService ] [feSXsTX] loaded module [transport-netty4]
[2018-09-19T19:08:59,389][INFO ][o.e.p.PluginsService ] [feSXsTX] no plugins loaded
[2018-09-19T19:09:10,792][INFO ][o.e.d.DiscoveryModule ] [feSXsTX] using discovery type [zen]
[2018-09-19T19:09:14,675][INFO ][o.e.n.Node ] initialized
[2018-09-19T19:09:14,677][INFO ][o.e.n.Node ] [feSXsTX] starting ...
[2018-09-19T19:09:15,785][INFO ][o.e.t.TransportService ] [feSXsTX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2018-09-19T19:09:15,878][WARN ][o.e.b.BootstrapChecks ] [feSXsTX] initial heap size [104857600] not equal to maximum heap size [314572800]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2018-09-19T19:09:15,879][WARN ][o.e.b.BootstrapChecks ] [feSXsTX] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2018-09-19T19:09:19,189][INFO ][o.e.c.s.ClusterService ] [feSXsTX] new_master {feSXsTX}{feSXsTXeQw-AEPi_pWmySw}{GJAcwscZQNacEta1vC5mPA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-09-19T19:09:19,320][INFO ][o.e.h.n.Netty4HttpServerTransport] [feSXsTX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2018-09-19T19:09:19,321][INFO ][o.e.n.Node ] [feSXsTX] started
[2018-09-19T19:09:20,504][INFO ][o.e.g.GatewayService ] [feSXsTX] recovered [1] indices into cluster_state
[2018-09-19T19:09:21,932][INFO ][o.e.c.r.a.AllocationService] [feSXsTX] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
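Note that the BootstrapChecks warning about the initial heap not matching the maximum heap is still there; if memory allows, setting both values to the same size in /etc/elasticsearch/jvm.options (illustrative values below) would silence it:
-Xms300m
-Xmx300m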
Depending on your OS, grab the right version (for me it's Mac OS Yosemite):
To get past this problem, you have to hold down the "CTRL" key and open it.
And I find myself stuck on this screen because the SD card is not visible.
Yet when I look at the logs, I see this:
jeu. sept. 22 17:53:18 2016 New disk device entry created with entry point /dev/rdisk3, 15.9 GB free space and label
jeu. sept. 22 17:53:18 2016 =================================================
jeu. sept. 22 17:53:18 2016 Starting to parse /dev/rdisk3 for additional info
jeu. sept. 22 17:53:18 2016 MediaName-Line: Device / Media Name: Apple SDXC Reader Media
jeu. sept. 22 17:53:18 2016 Protocol-Line: Protocol: Secure Digital
jeu. sept. 22 17:53:18 2016 Determined Secure Digital as protocol for /dev/rdisk3
jeu. sept. 22 17:53:18 2016 Decided to be a DMG: no
jeu. sept. 22 17:53:18 2016 R/O-Line: Read-Only Media: Yes
jeu. sept. 22 17:53:18 2016 parsed/split/simplified readOnly line would have been: Yes
jeu. sept. 22 17:53:18 2016 Determined Yes as readOnlyMedia for /dev/rdisk3
jeu. sept. 22 17:53:18 2016 Decided to be r/o: yes
jeu. sept. 22 17:53:18 2016 Ejectable-Line: Ejectable: Yes
jeu. sept. 22 17:53:18 2016 Determined Yes as ejactableProperty for /dev/rdisk3
jeu. sept. 22 17:53:18 2016 Decided that /dev/rdisk3 is not writable to us
jeu. sept. 22 17:53:18 2016 Parsed device as NON-writable. NOT Appending.
jeu. sept. 22 17:53:18 2016
jeu. sept. 22 17:53:18 2016 Finished parsing additional info for /dev/rdisk3
jeu. sept. 22 17:53:18 2016 =================================================
Let's see what is on the SD card with the "diskutil list" command:
# diskutil list /dev/disk3
/dev/disk3
   #:                       TYPE NAME                 SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                     *15.9 GB    disk3
   1:             Windows_FAT_16 RECOVERY             1.2 GB     disk3s1
   2:                      Linux                      33.6 MB    disk3s5
I unmount the card with the "diskutil unmountDisk" command:
# diskutil unmountDisk /dev/disk3
Unmount of all volumes on disk3 was successful
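For reference (the rest of this post uses the graphical installer instead), once the disk is unmounted, the usual command-line way to write an image on macOS would be something like this, with a placeholder image name and the raw device:
# sudo dd if=osmc.img of=/dev/rdisk3 bs=1m    # osmc.img is a placeholder for the downloaded image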
Let's look at the supported file systems:
# diskutil listFilesystems
Formattable file systems
These file system personalities can be used for erasing and partitioning.
When specifying a personality as a parameter to a verb, case is not considered.
Certain common aliases (also case-insensitive) are listed below as well.
-------------------------------------------------------------------------------
PERSONALITY USER VISIBLE NAME
-------------------------------------------------------------------------------
ExFAT ExFAT
UFSD_EXTFS Extended Filesystem 2
UFSD_EXTFS3 Extended Filesystem 3
UFSD_EXTFS4 Extended Filesystem 4
Free Space Free Space
(or) free
MS-DOS MS-DOS (FAT)
MS-DOS FAT12 MS-DOS (FAT12)
MS-DOS FAT16 MS-DOS (FAT16)
MS-DOS FAT32 MS-DOS (FAT32)
(or) fat32
HFS+ Mac OS Extended
Case-sensitive HFS+ Mac OS Extended (Case-sensitive)
(or) hfsx
Case-sensitive Journaled HFS+ Mac OS Extended (Case-sensitive, Journaled)
(or) jhfsx
Journaled HFS+ Mac OS Extended (Journaled)
(or) jhfs+
UFSD_NTFS Windows NT Filesystem
UFSD_NTFSCOMPR Windows NT Filesystem (compressed)
Two errors: "Permission denied", with two different SD cards.
I decide to switch to another Mac and try with Mac OS El Capitan:
It starts off well, since formatting the SD card works:
I try the graphical installation again, and this time it works. Apparently Yosemite has trouble writing to micro-SD cards.
With El Capitan:
Now I need to run the actual test, but first let's look at the file systems supported by El Capitan:
$ diskutil listFilesystems
Formattable file systems
These file system personalities can be used for erasing and partitioning.
When specifying a personality as a parameter to a verb, case is not considered.
Certain common aliases (also case-insensitive) are listed below as well.
-------------------------------------------------------------------------------
PERSONALITY USER VISIBLE NAME
-------------------------------------------------------------------------------
ExFAT ExFAT
Free Space Free Space
(or) free
MS-DOS MS-DOS (FAT)
MS-DOS FAT12 MS-DOS (FAT12)
MS-DOS FAT16 MS-DOS (FAT16)
MS-DOS FAT32 MS-DOS (FAT32)
(or) fat32
HFS+ Mac OS Extended
Case-sensitive HFS+ Mac OS Extended (Case-sensitive)
(or) hfsx
Case-sensitive Journaled HFS+ Mac OS Extended (Case-sensitive, Journaled)
(or) jhfsx
Journaled HFS+ Mac OS Extended (Journaled)
(or) jhfs+
I still have a hard time understanding why the microSD card could only be read on Yosemite.