3/21/2017

SD cards and Android Phones

I'm the owner of an Asus Zenfone Max 2, and I'm pretty happy with it, because it has three things that are important to me: dual SIM (I travel often, so I always carry a couple of SIMs with me), a MicroSD slot, and a good-capacity battery (5000 mAh).

I bought a 64 GB MicroSD card to expand the storage capacity of my phone, but I started to have some issues:

- I found that bigger cards (64, 128, 256 GB) come formatted with the exFAT filesystem, which is sometimes not recognized by some devices. Here's a nice post that explains the different types of filesystems for SD cards.

- There are some solutions that require enabling the root user on the phone, but I was not planning to do that, at least in the short term.

Searching the internet, I found that a workaround for this issue is to format the card with the FAT32 filesystem. The only disadvantage of this option is that FAT32 doesn't support files larger than 4 GB. I was not planning to use files of that size, so the workaround is suitable for me :).

By default, if you are a Windows user, the format utility will format the device as either exFAT or NTFS, so you need to download a third-party tool to do the trick. Here's the link to download the tool.

If you are a Linux user like me, you can do the trick with the GParted tool, available in most Linux distributions. You just need to choose the SD card in the menu, delete the partition (the default exFAT partition) and create a new one, choosing the FAT32 filesystem.
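If you prefer the command line over the GParted GUI, the same steps can be sketched with parted and mkfs.vfat. This is only a sketch, not the method from the post: /dev/sdX is a placeholder for your card's device (check with lsblk first), and the commands are echoed by default because they erase the card.

```shell
#!/bin/sh
# Sketch of the GParted steps above as plain commands. /dev/sdX is a
# placeholder device: find the real one with `lsblk` first, because these
# commands erase the card. Commands are echoed by default; set DRY_RUN=
# (empty) to really run them.
DEV=${DEV:-/dev/sdX}
DRY_RUN=${DRY_RUN:-echo}

$DRY_RUN parted --script "$DEV" mklabel msdos                   # new MBR partition table
$DRY_RUN parted --script "$DEV" mkpart primary fat32 1MiB 100%  # one partition, whole card
$DRY_RUN mkfs.vfat -F 32 -n SDCARD "${DEV}1"                    # format it as FAT32
```

With the defaults this only prints the three commands, which is a safe way to review them before touching the real device.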

Now you should be able to use the card with your cellphone :)

P.S.: In both scenarios you need to connect the SD card to your computer/laptop using the MicroSD adapter that comes with your card, or a USB card reader.

Thanks for visiting !!

Rodolfo



3/06/2017

OpenStack deployment Solaris 11.3 - Part 3/4

Now I will try to use a supported server for the kernel zone, in my case a T4-1 server (part of the T4 series, described as a supported platform).

root@t4-1:~$ prtdiag -v | grep System
System Configuration:  Oracle Corporation  sun4v SPARC T4-1
Sun System Firmware 8.9.5.a 2016/08/09 15:24

We can also confirm that kernel zones are supported:

root@t4-1:~$ virtinfo
NAME            CLASS
logical-domain  current
non-global-zone supported
kernel-zone     supported
logical-domain  supported

Virtual CPUs: 64

root@t4-1:~$ psrinfo | wc -l
      64

root@t4-1:~$ prtdiag -v | grep Memory
Memory size: 65024 Megabytes

root@t4-1:~$ more /etc/release
                            Oracle Solaris 11.3 SPARC
  Copyright (c) 1983, 2016, Oracle and/or its affiliates.  All rights reserved.
                            Assembled 03 August 2016
root@t4-1:~$

root@t4-1:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared

I will create a kernel zone and assign it more memory and vCPUs (trying to give it enough resources for VM/zone creation), following the Oracle documentation:

https://docs.oracle.com/cd/E65465_01/html/E57770/kzuar.html#scrolltoc

root@t4-1:~# zonecfg -z openstack
Use 'create' to begin configuring a new zone.
zonecfg:openstack> create -t SYSsolaris-kz
zonecfg:openstack> select virtual-cpu
zonecfg:openstack:virtual-cpu> set ncpus=36
zonecfg:openstack:virtual-cpu> end
zonecfg:openstack> select capped-memory
zonecfg:openstack:capped-memory> set physical=40g
zonecfg:openstack:capped-memory> end
zonecfg:openstack> verify
zonecfg:openstack> exit
root@t4-1:~#
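Before the (long) install step, the configuration above can be double-checked with zonecfg's info subcommand. A minimal sketch; zonecfg and zoneadm exist only on Solaris, so the commands are just echoed here (on the T4 host, drop the leading echo):

```shell
#!/bin/sh
# Sketch: review the zone's resource caps before installing. zonecfg and
# zoneadm are Solaris-only commands, so they are echoed here; on the real
# host, drop the leading `echo`.
ZONE=openstack
echo zonecfg -z "$ZONE" info virtual-cpu    # expect ncpus: 36
echo zonecfg -z "$ZONE" info capped-memory  # expect physical: 40G
echo zoneadm list -cv                       # the zone should show up as 'configured'
```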

Now we can start the installation process:

root@t4-1:~# zoneadm -z openstack install -a sol-11_3-openstack-sparc.uar -x install-size=60g

Progress being logged to /var/log/zones/zoneadm.20170302T200621Z.openstack.install
[Connected to zone 'openstack' console]
NOTICE: Entering OpenBoot.
NOTICE: Fetching Guest MD from HV.
NOTICE: Starting additional cpus.
NOTICE: Initializing LDC services.
NOTICE: Probing PCI devices.
NOTICE: Finished PCI probing.

SPARC T4-1, No Keyboard
Copyright (c) 1998, 2016, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.38.5, 40.0000 GB memory available, Serial #327404094.
Ethernet address 0:0:0:0:0:0, Host ID: 1383ca3e.

Boot device: disk1  File and args: - install aimanifest=/system/shared/ai.xml
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights reserved.

I saw some network warnings during the boot:

Mar  2 20:06:44 auditd[15]: adt_get_local_address couldn't get 26 addrlist socket: Address family not supported by protocol family: Bad file number.
Mar  2 20:06:44 auditd[15]: adt_get_local_address failed, no Audit IP address available, faking loopback for  and error Network is down.
Remounting root read/write
Probing for device nodes ...

Mar  2 20:06:45 auditd[29]: getaddrinfo() failed[node name or service name not known].
Mar  2 20:06:45 auditd[29]: adt_get_local_address couldn't get 26 addrlist socket: Address family not supported by protocol family: Bad file number.
Mar  2 20:06:45 auditd[29]: adt_get_local_address failed, no Audit IP address available, faking loopback for  and error Network is down.
Mar  2 20:06:46 auditd[44]: getaddrinfo() failed[node name or service name not known].
Mar  2 20:06:48 auditd[44]: adt_get_local_address failed, no Audit IP address available, faking loopback for  and error Network is down.

and then the UAR file was unpacked:

Preparing image for use
Done mounting image
Configuring devices.
Hostname: solaris
Using specified install manifest : /system/shared/ai.xml

solaris console login:
Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log
Press RETURN to get a login prompt at any time.

20:08:22    Install Log: /system/volatile/install_log
20:08:22    Using XML Manifest: /system/volatile/ai.xml
20:08:22    Using profile specification: /system/volatile/profile
20:08:22    Starting installation.
20:08:22    0% Preparing for Installation
20:08:22    100% manifest-parser completed.
20:08:22    100% None
20:08:22    0% Preparing for Installation
20:08:23    1% Preparing for Installation
20:08:23    2% Preparing for Installation
20:08:23    3% Preparing for Installation
20:08:23    4% Preparing for Installation
20:08:24    5% archive-1 completed.
20:08:24    6% install-env-configuration completed.
20:08:26    9% target-discovery completed.
20:08:29    Pre-validating manifest targets before actual target selection
20:08:29    Selected Disk(s) : c1d0
20:08:29    Pre-validation of manifest targets completed
20:08:29    Validating combined manifest and archive origin targets
20:08:29    Selected Disk(s) : c1d0
20:08:29    9% target-selection completed.
20:08:29    10% ai-configuration completed.
..............
..............
..............
20:11:25    96% boot-archive completed.
20:11:26    Setting boot devices in firmware
20:11:26    Setting openprom boot-device
20:11:26    98% boot-configuration completed.
20:12:32    98% update-filesystem-owner-group completed.
20:12:32    98% transfer-ai-files completed.
20:12:32    98% cleanup-archive-install completed.
20:12:32    100% create-snapshot completed.
20:12:32    100% None
20:12:33    Automated Installation succeeded.
20:12:33    You may wish to reboot the system at this time.
Automated Installation finished successfully
The system can be rebooted now
Please refer to the /system/volatile/install_log file for details
After reboot it will be located at /var/log/install/install_log

[NOTICE: Zone halted]

[Connection to zone 'openstack' console closed]
        Done: Installation completed in 365.752 seconds.

Then I booted the zone and finished the setup:

root@t4-1-tvp540-i:/export/home/jack# zoneadm -z openstack boot
root@t4-1-tvp540-i:/export/home/jack# zlogin -C openstack
[Connected to zone 'openstack' console]
254/254
Configuring devices.
SC profile successfully generated as:
/etc/svc/profile/sysconfig/sysconfig-20170302-134943/sc_profile.xml

Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.320
Loading smf(5) service descriptions: 2/2
Hostname: openstack

openstack console login:

Once logged in, I could verify that several OpenStack services are installed:

root@openstack:~$ svcs -a | grep openstack | wc -l
      58

I also noticed that some services were disabled:

root@openstack:~$ svcs -a | grep openstack

disabled       21:49:35 svc:/application/openstack/heat/heat-db:default
disabled       21:49:35 svc:/application/openstack/heat/heat-api-cloudwatch:default
disabled       21:49:36 svc:/application/openstack/heat/heat-api-cfn:default
disabled       21:49:37 svc:/application/openstack/ironic/ironic-db:default
disabled       21:49:37 svc:/application/openstack/ironic/ironic-api:default
disabled       21:49:37 svc:/application/openstack/heat/heat-api:default
disabled       21:49:38 svc:/application/openstack/swift/swift-container-reconciler:default
disabled       21:49:38 svc:/application/openstack/cinder/cinder-backup:default
disabled       21:49:38 svc:/application/openstack/heat/heat-engine:default
disabled       21:49:38 svc:/application/openstack/ironic/ironic-conductor:default
disabled       21:49:39 svc:/application/openstack/neutron/neutron-l3-agent:default

I assumed they are not required for a basic OpenStack deployment, which is the purpose of this post.
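If one of those services is needed later (Heat for orchestration, say), SMF can switch it on with svcadm. A hedged sketch: svcadm is Solaris-only, so the commands are echoed here, and the db/engine/api ordering is my assumption, not something from this deployment.

```shell
#!/bin/sh
# Sketch: enable the disabled Heat services if orchestration is wanted later.
# svcadm is Solaris-only, so the commands are echoed here; the db -> engine
# -> api order is an assumption (SMF dependencies would enforce it anyway).
for svc in heat-db heat-engine heat-api; do
    echo svcadm enable "svc:/application/openstack/heat/$svc:default"
done
echo svcs -x   # afterwards, check that nothing fell into maintenance
```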

To access the OpenStack login page, we need to go to: http://your_kernel_zoneip/horizon

The user and password are admin and secrete, as described in this doc:

https://docs.oracle.com/cd/E65465_01/html/E57770/dashboard-access.html#scrolltoc




OpenStack deployment Solaris 11.3 - Part 4/4

We need to log in to our OpenStack instance (http://ip_zone/horizon) with the credentials mentioned before.
User: admin, Password: secrete

After we log in to the OpenStack Dashboard, we can see a couple of things:

From System -> Hypervisors, we can see that the available resources are the same ones we defined when the kernel zone was created (capped memory, install disk size, and virtual CPUs).





The structure of OpenStack is based on projects and users. By default, this installation has the project demo and the user admin, which is a member of that project.

Now that we have access to our OpenStack dashboard, we can proceed to create some VMs/zones.
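Instances can also be launched from the command line with the Juno-era nova client instead of the dashboard. A sketch with hypothetical placeholders: IMAGE_ID and NET_ID are not values from this deployment (look up the real ones with glance image-list and neutron net-list), and the commands are echoed here since the clients run inside the kernel zone.

```shell
#!/bin/sh
# Sketch: boot a first VM/zone from the CLI instead of Horizon.
# IMAGE_ID and NET_ID are hypothetical placeholders, not values from this
# deployment; look them up with `glance image-list` and `neutron net-list`.
# Commands are echoed here since the clients run inside the kernel zone.
IMAGE_ID=REPLACE_WITH_IMAGE_UUID
NET_ID=REPLACE_WITH_NET_UUID
echo nova boot --flavor 1 --image "$IMAGE_ID" --nic net-id="$NET_ID" myfirstzone
echo nova list   # watch the instance go from BUILD to ACTIVE
```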



CONCLUSIONS AND FINAL THOUGHTS

- There are several methods that can be used to deploy OpenStack on Solaris; your choice depends on what you want to achieve and on your resources (if you have a server where you can make a bare-metal installation, that's OK, and if you can test this solution only as a zone, that's an option too).

- If you are planning to deploy kernel zones, you need to make sure that your hardware (SPARC or x64) supports this type of technology.

USEFUL LINKS

- Kernel Zones Solaris 11.2 Hardware requirements
- Kernel Zones Solaris 11.3 Hardware requirements
- Installing and Configuring OpenStack (Juno) in Oracle Solaris
- Configuring the Sol 11.3 OpenStack UAR - Tim Wort

Thanks for visiting

Rodolfo

INDEX

OpenStack deployment Solaris 11.3 - Part 1/4
OpenStack deployment Solaris 11.3 - Part 2/4
OpenStack deployment Solaris 11.3 - Part 3/4
OpenStack deployment Solaris 11.3 - Part 4/4






OpenStack deployment Solaris 11.3 - Part 2/4

OpenStack deployment can sometimes be a bit overwhelming, because it has a lot of components that interact with each other, and the deployment can span several servers running the services mentioned in the previous post (Cinder, Horizon, Swift, Glance, etc.).

Many production environments are built with one server acting as a compute node (running the Nova and Neutron services, for example), another as a controller node (running Cinder and Glance, for example), and several storage appliances (a ZFS appliance, an iSCSI server, or Fibre Channel storage) as part of the architecture.

We can find several deployment scenarios, and everything depends on what we want to achieve and on our budget.

You can find here some interesting links about OpenStack Deployment requirements: 

- OpenStack Operations Guide
- Hardware requirements of OpenStack

There's a good way to test this virtualization solution without the pain of a full implementation: all-in-one setups. This type of configuration provides an easy way to evaluate the solution, most of the time via virtual machines with all the OpenStack services running in the same OS instance.

You can find here a useful link with this type of tools.

There's a Solaris all-in-one solution, provided by Oracle, that is distributed as a UAR file. UAR files (Unified Archives) are created by the archiveadm utility, which clones a Solaris instance, including any zones running in the OS. This provides an easy way to clone systems and accelerate Solaris deployments.

The UAR file with the Solaris OS, running OpenStack services, can be downloaded from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/unified-archives-2245488.html 

We will use the SPARC UAR file, because our testing server will be a SPARC box.
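Once downloaded, the archive can be sanity-checked before deploying it. A small sketch using Solaris's digest and archiveadm utilities; since they only exist on Solaris, the commands are echoed here.

```shell
#!/bin/sh
# Sketch: sanity-check the downloaded archive before installing from it.
# digest and archiveadm are Solaris-only, so the commands are echoed here.
UAR=sol-11_3-openstack-sparc.uar
echo digest -a sha256 "$UAR"   # compare against the checksum on the download page
echo archiveadm info "$UAR"    # lists the systems contained in the archive
```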

DEPLOYMENT

As I mentioned before our testing server will be a SPARC box, running Solaris 11.3.

root@t3-2:~# more /etc/release
                            Oracle Solaris 11.3 SPARC
  Copyright (c) 1983, 2016, Oracle and/or its affiliates.  All rights reserved.
                            Assembled 03 August 2016


root@t3-2:~#  prtdiag -v | grep 'System'
System Configuration:  Oracle Corporation  sun4v SPARC T3-2
Sun System Firmware 8.3.11 2015/06/04 07:29
====================== System PROM revisions =======================
root@t3-2:~#

Virtual CPUs: 256

root@t3-2:~# psrinfo | wc -l
     256

root@t3-2:~# prtdiag -v | grep 'Memory'
Memory size: 261632 Megabytes

I was planning to use the kernel zone option, as described in the following document.


But when I tried to configure and install the kernel zone, I got an error:

root@t3-2-syd04-b:~# zoneadm -z openstack install -a sol-11_3-openstack-sparc.uar -x install-size=50g
Platform does not support the kernel-zone brand.
zoneadm: zone openstack failed to verify

I realized that this server doesn't support kernel zones:

root@t3-2:~# virtinfo
NAME            CLASS
logical-domain  current
non-global-zone supported
logical-domain  supported
root@t3-2-syd04-b:~#
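The failed install could have been avoided by checking virtinfo for a kernel-zone line up front. A sketch, run here against the T3-2 output captured above; on a live system, pipe virtinfo straight into the grep.

```shell
#!/bin/sh
# Sketch: test for kernel-zone support before attempting the install.
# This uses the T3-2 `virtinfo` output captured above as sample text;
# on a live system, use: virtinfo | grep -q '^kernel-zone'
virtinfo_sample='logical-domain  current
non-global-zone supported
logical-domain  supported'

if printf '%s\n' "$virtinfo_sample" | grep -q '^kernel-zone'; then
    echo "kernel zones: supported"
else
    echo "kernel zones: NOT supported"   # this is the branch the T3-2 hits
fi
```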

I also confirmed this in the following document:

Hardware and Software Requirements for Oracle Solaris Kernel Zones


The physical machine must meet the following requirements.

SPARC based systems:

A SPARC T4 series server with at least System Firmware 8.8.

A SPARC T5, SPARC M5, or SPARC M6 series server with at least System Firmware 9.5.

A SPARC T7 or SPARC M7 series server. All firmware versions are supported.

To run kernel zones, a Fujitsu M10 or SPARC M10 server with XCP Firmware 2230 or newer and Oracle Solaris 11.3 or newer.

Unfortunately, this is a T3 series server (T3-2), and it doesn't support kernel zones :(.

After this, I considered other options:

1) Choose the alternative deployment mentioned in the document (the AI alternative). I disregarded this option because it is not the easiest way to accomplish our purpose (fast deployment and testing): I don't have an Automated Installer (AI) server configured, and this option performs a bare-metal installation of the OpenStack solution.

2) Create a non-global zone and install all the OpenStack services from the repository; this would let me reuse an existing server without consuming too many resources. It looks like a good option, but a couple of things have to be configured in order to have a working deployment.

3) Finally, the last option was to use a server that supports kernel zones and continue with the original plan. I chose this option to stay within the original scope of this post.

Points 1) and 2) can be covered in future posts :).

I will continue with the OpenStack configuration in the next post.




OpenStack deployment Solaris 11.3 - Part 1/4

INTRODUCTION

OpenStack has become, in recent years, a key player in the virtualization world, providing a solid and versatile solution.

This technology has been mainly deployed in Linux environments, and in the last couple of years it has been introduced in the Solaris OS as a way to deploy zone virtualization in a friendly manner, adding all the benefits of the OpenStack architecture (data redundancy, high availability, virtualization templates, among other things).

For those who are not familiar with this virtualization solution, here you can find a diagram of the main OpenStack components and an explanation of each of them.




Nova

The Nova compute virtualization service provides a cloud computing fabric controller that supports a variety of virtualization technologies. In Oracle Solaris, virtual machine (VM) instances are kernel zones or non-global zones. Zones are scalable high-density virtual environments with low virtualization overhead. Kernel zones also provide independent kernel versions, enabling independent upgrade of VM instances, which is desirable for a multi-project cloud.

Neutron

The Neutron network virtualization service provides network connectivity for other OpenStack services on multiple OpenStack systems and for VM instances. In Oracle Solaris, network virtualization services are provided through the Elastic Virtual Switch (EVS) capability, which acts as a single point of control for creating, configuring, and monitoring virtual switches that span multiple physical servers. Applications can drive their own behavior for prioritizing network traffic across the cloud. Neutron provides an API for users to dynamically request and configure virtual networks. These networks connect interfaces such as VNICs from Nova VM instances.

Cinder

The Cinder block storage service provides an infrastructure for managing block storage volumes in OpenStack. Cinder enables you to expose block devices and connect block devices to VM instances for expanded storage, better performance, and integration with enterprise storage platforms. In Oracle Solaris, Cinder uses ZFS for storage and uses iSCSI or Fibre Channel for remote access. ZFS provides integrated data services including snapshots, encryption, and deduplication. 

Swift

The Swift object storage service provides redundant and scalable object storage services for OpenStack projects and users. Swift stores and retrieves arbitrary unstructured data using ZFS, and the data is then accessible via a RESTful API.

Glance

The Glance image store service stores disk images of virtual machines, which are used to deploy VM instances. In Oracle Solaris, Glance images are Unified Archives. The images can be stored in a variety of locations from simple file systems to object storage systems such as OpenStack Swift. Glance has a RESTful API that enables you to query image metadata as well as retrieve the image.
Unified Archives enable secure, compliant, fast, and scalable deployment. The same Unified Archive can be used to deploy either bare-metal or virtual systems. You can use Unified Archives with the Automated Installer (AI) to rapidly provision many systems.

Horizon

Horizon is the OpenStack dashboard where you can manage the cloud infrastructure and computing infrastructure in order to support multiple VM instances. The dashboard provides a web-based user interface to OpenStack services.

Keystone

The Keystone identity service provides authentication and authorization services between users, administrators, and OpenStack services.

Heat

The Heat orchestration service engine enables developers to automate the implementation of an OpenStack infrastructure. The engine is driven by templates that contain configuration information and post-installation operations to deploy a customized configuration.

Each OpenStack service is represented by one or more Service Management Facility (SMF) services. SMF regulates OpenStack services, for example, by performing automatic service restart in case of failure or full service dependency checking for more precise and efficient startup.