<!doctype html>
<html>
<head>
<meta charset="utf-8">
<!-- Always force latest IE rendering engine or request Chrome Frame -->
<meta content="IE=edge,chrome=1" http-equiv="X-UA-Compatible">
<!-- REPLACE X WITH PRODUCT NAME -->
<title>Administering PHD Using the CLI | Pivotal Docs</title>
<!-- Local CSS stylesheets -->
<link href="/stylesheets/master.css" media="screen,print" rel="stylesheet" type="text/css" />
<link href="/stylesheets/breadcrumbs.css" media="screen,print" rel="stylesheet" type="text/css" />
<link href="/stylesheets/search.css" media="screen,print" rel="stylesheet" type="text/css" />
<link href="/stylesheets/portal-style.css" media="screen,print" rel="stylesheet" type="text/css" />
<link href="/stylesheets/printable.css" media="print" rel="stylesheet" type="text/css" />
<!-- Confluence HTML stylesheet -->
<link href="/stylesheets/site-conf.css" media="screen,print" rel="stylesheet" type="text/css" />
<!-- Left-navigation code -->
<!-- http://www.designchemical.com/lab/jquery-vertical-accordion-menu-plugin/examples/# -->
<link href="/stylesheets/dcaccordion.css" rel="stylesheet" type="text/css" />
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
<script src="/javascripts/jquery.cookie.js" type="text/javascript"></script>
<script src="/javascripts/jquery.hoverIntent.minified.js" type="text/javascript"></script>
<script src="/javascripts/jquery.dcjqaccordion.2.7.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function($){
$('#accordion-1').dcAccordion({
eventType: 'click',
autoClose: true,
saveState: true,
disableLink: false,
speed: 'fast',
classActive: 'test',
showCount: false
});
});
</script>
<link href="/stylesheets/grey.css" rel="stylesheet" type="text/css" />
<!-- End left-navigation code -->
<script src="/javascripts/all.js" type="text/javascript"></script>
<link href='http://www.gopivotal.com/misc/favicon.ico' rel='shortcut icon'>
<script type="text/javascript">
if (window.location.host === 'docs.gopivotal.com') {
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-39702075-1']);
_gaq.push(['_setDomainName', 'gopivotal.com']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
}
</script>
</head>
<body class="pivotalcf pivotalcf_getstarted pivotalcf_getstarted_index">
<div class="viewport">
<div class="mobile-navigation--wrapper mobile-only">
<div class="navigation-drawer--container">
<div class="navigation-item-list">
<div class="navbar-link active">
<a href="http://gopivotal.com">
Home
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/paas">
PaaS
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/big-data">
Big Data
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/agile">
Agile
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/support">
Help & Support
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/products">
Products
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/solutions">
Solutions
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
<div class="navbar-link">
<a href="http://gopivotal.com/partners">
Partners
<i class="icon-chevron-right pull-right"></i>
</a>
</div>
</div>
</div>
<div class="mobile-nav">
<div class="nav-icon js-open-nav-drawer">
<i class="icon-reorder"></i>
</div>
<div class="header-center-icon">
<a href="http://gopivotal.com">
<div class="icon icon-pivotal-logo-mobile"></div>
</a>
</div>
</div>
</div>
<div class='wrap'>
<script src="//use.typekit.net/clb0qji.js" type="text/javascript"></script>
<script type="text/javascript">
try {
Typekit.load();
} catch (e) {
}
</script>
<script type="text/javascript">
document.domain = "gopivotal.com";
</script>
<script type="text/javascript">
WebFontConfig = {
google: { families: [ 'Source+Sans+Pro:300italic,400italic,600italic,300,400,600:latin' ] }
};
(function() {
var wf = document.createElement('script');
wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
'://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
wf.type = 'text/javascript';
wf.async = 'true';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(wf, s);
})(); </script>
<div id="search-dropdown-box">
<div class="search-dropdown--container js-search-dropdown">
<div class="container-fluid">
<div class="close-menu-large"><img src="http://www.gopivotal.com/sites/all/themes/gopo13/images/icon-close.png" /></div>
<div class="search-form--container">
<div class="form-search">
<div class='gcse-search'></div>
<script src="http://www.google.com/jsapi" type="text/javascript"></script>
<script src="/javascripts/cse.js" type="text/javascript"></script>
</div>
</div>
</div>
</div>
</div>
<header class="navbar desktop-only" id="nav">
<div class="navbar-inner">
<div class="container-fluid">
<div class="pivotal-logo--container">
<a class="pivotal-logo" href="http://gopivotal.com"><span></span></a>
</div>
<ul class="nav pull-right">
<li class="navbar-link">
<a href="http://www.gopivotal.com/paas" id="paas-nav-link">PaaS</a>
</li>
<li class="navbar-link">
<a href="http://www.gopivotal.com/big-data" id="big-data-nav-link">BIG DATA</a>
</li>
<li class="navbar-link">
<a href="http://www.gopivotal.com/agile" id="agile-nav-link">AGILE</a>
</li>
<li class="navbar-link">
<a href="http://www.gopivotal.com/oss" id="oss-nav-link">OSS</a>
</li>
<li class="nav-search">
<a class="js-search-input-open" id="click-to-search"><span></span></a>
</li>
</ul>
</div>
<a href="http://www.gopivotal.com/contact">
<img id="get-started" src="http://www.gopivotal.com/sites/all/themes/gopo13/images/get-started.png">
</a>
</div>
</header>
<div class="main-wrap">
<div class="container-fluid">
<!-- Google CSE Search Box -->
<div id='docs-search'>
<gcse:search></gcse:search>
</div>
<div id='all-docs-link'>
<a href="http://docs.gopivotal.com/">All Documentation</a>
</div>
<div class="container">
<div id="sub-nav" class="nav-container">
<!-- Collapsible left-navigation-->
<ul class="accordion" id="accordion-1">
<!-- REPLACE <li/> NODES-->
<li>
<a href="index.html">Home</a></br>
<li>
<a href="PivotalHD.html">Pivotal HD 2.0.1</a>
<ul>
<li>
<a href="PHDEnterprise2.0.1ReleaseNotes.html">PHD Enterprise 2.0.1 Release Notes</a>
</li>
</ul>
<ul>
<li>
<a href="PHDInstallationandAdministration.html">PHD Installation and Administration</a>
<ul>
<li>
<a href="OverviewofPHD.html">Overview of PHD</a>
</li>
</ul>
<ul>
<li>
<a href="InstallationOverview.html">Installation Overview</a>
</li>
</ul>
<ul>
<li>
<a href="PHDInstallationChecklist.html">PHD Installation Checklist</a>
</li>
</ul>
<ul>
<li>
<a href="InstallingPHDUsingtheCLI.html">Installing PHD Using the CLI</a>
</li>
</ul>
<ul>
<li>
<a href="UpgradeChecklist.html">Upgrade Checklist</a>
</li>
</ul>
<ul>
<li>
<a href="UpgradingPHDUsingtheCLI.html">Upgrading PHD Using the CLI</a>
</li>
</ul>
<ul>
<li>
<a href="AdministeringPHDUsingtheCLI.html">Administering PHD Using the CLI</a>
</li>
</ul>
<ul>
<li>
<a href="PHDFAQFrequentlyAskedQuestions.html">PHD FAQ (Frequently Asked Questions)</a>
</li>
</ul>
<ul>
<li>
<a href="PHDTroubleshooting.html">PHD Troubleshooting</a>
</li>
</ul>
</li>
</ul>
<ul>
<li>
<a href="StackandToolsReference.html">Stack and Tools Reference</a>
<ul>
<li>
<a href="OverviewofApacheStackandPivotalComponents.html">Overview of Apache Stack and Pivotal Components</a>
</li>
</ul>
<ul>
<li>
<a href="ManuallyInstallingPivotalHD2.0Stack.html">Manually Installing Pivotal HD 2.0 Stack</a>
</li>
</ul>
<ul>
<li>
<a href="ManuallyUpgradingPivotalHDStackfrom1.1.1to2.0.html">Manually Upgrading Pivotal HD Stack from 1.1.1 to 2.0</a>
</li>
</ul>
<ul>
<li>
<a href="PivotalHadoopEnhancements.html">Pivotal Hadoop Enhancements</a>
</li>
</ul>
<ul>
<li>
<a href="Security.html">Security</a>
</li>
</ul>
</li>
</ul>
</li>
<li>
<a href="PivotalCommandCenter.html">Pivotal Command Center 2.2.1</a>
<ul>
<li>
<a href="PCC2.2.1ReleaseNotes.html">PCC 2.2.1 Release Notes</a>
</li>
</ul>
<ul>
<li>
<a href="PCCUserGuide.html">PCC User Guide</a>
<ul>
<li>
<a href="PCCOverview.html">PCC Overview</a>
</li>
</ul>
<ul>
<li>
<a href="PCCInstallationChecklist.html">PCC Installation Checklist</a>
</li>
</ul>
<ul>
<li>
<a href="InstallingPCC.html">Installing PCC</a>
</li>
</ul>
<ul>
<li>
<a href="UsingPCC.html">Using PCC</a>
</li>
</ul>
<ul>
<li>
<a href="CreatingaYUMEPELRepository.html">Creating a YUM EPEL Repository</a>
</li>
</ul>
<ul>
<li>
<a href="CommandLineReference.html">Command Line Reference</a>
</li>
</ul>
</li>
</ul>
</li>
<li>
<a href="PivotalHAWQ.html">Pivotal HAWQ 1.2.0</a>
<ul>
<li>
<a href="HAWQ1.2.0.1ReleaseNotes.html">HAWQ 1.2.0.1 Release Notes</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQInstallationandUpgrade.html">HAWQ Installation and Upgrade</a>
<ul>
<li>
<a href="PreparingtoInstallHAWQ.html">Preparing to Install HAWQ</a>
</li>
</ul>
<ul>
<li>
<a href="InstallingHAWQ.html">Installing HAWQ</a>
</li>
</ul>
<ul>
<li>
<a href="InstallingtheHAWQComponents.html">Installing the HAWQ Components</a>
</li>
</ul>
<ul>
<li>
<a href="UpgradingHAWQandComponents.html">Upgrading HAWQ and Components</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQConfigurationParameterReference.html">HAWQ Configuration Parameter Reference</a>
</li>
</ul>
</li>
</ul>
<ul>
<li>
<a href="HAWQAdministration.html">HAWQ Administration</a>
<ul>
<li>
<a href="HAWQOverview.html">HAWQ Overview</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQQueryProcessing.html">HAWQ Query Processing</a>
</li>
</ul>
<ul>
<li>
<a href="UsingHAWQtoQueryData.html">Using HAWQ to Query Data</a>
</li>
</ul>
<ul>
<li>
<a href="ConfiguringClientAuthentication.html">Configuring Client Authentication</a>
</li>
</ul>
<ul>
<li>
<a href="KerberosAuthentication.html">Kerberos Authentication</a>
</li>
</ul>
<ul>
<li>
<a href="ExpandingtheHAWQSystem.html">Expanding the HAWQ System</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQInputFormatforMapReduce.html">HAWQ InputFormat for MapReduce</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQFilespacesandHighAvailabilityEnabledHDFS.html">HAWQ Filespaces and High Availability Enabled HDFS</a>
</li>
</ul>
<ul>
<li>
<a href="SQLCommandReference.html">SQL Command Reference</a>
</li>
</ul>
<ul>
<li>
<a href="ManagementUtilityReference.html">Management Utility Reference</a>
</li>
</ul>
<ul>
<li>
<a href="ClientUtilityReference.html">Client Utility Reference</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQServerConfigurationParameters.html">HAWQ Server Configuration Parameters</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQEnvironmentVariables.html">HAWQ Environment Variables</a>
</li>
</ul>
<ul>
<li>
<a href="HAWQDataTypes.html">HAWQ Data Types</a>
</li>
</ul>
<ul>
<li>
<a href="SystemCatalogReference.html">System Catalog Reference</a>
</li>
</ul>
<ul>
<li>
<a href="hawq_toolkitReference.html">hawq_toolkit Reference</a>
</li>
</ul>
</li>
</ul>
<ul>
<li>
<a href="PivotalExtensionFrameworkPXF.html">Pivotal Extension Framework (PXF)</a>
<ul>
<li>
<a href="PXFInstallationandAdministration.html">PXF Installation and Administration</a>
</li>
</ul>
<ul>
<li>
<a href="PXFExternalTableandAPIReference.html">PXF External Table and API Reference</a>
</li>
</ul>
</div><!--end of sub-nav-->
<h3 class="title-container">Administering PHD Using the CLI</h3>
<div class="content">
<!-- Python script replaces main content -->
<div id ="main"><div style="visibility:hidden; height:2px;">Pivotal Product Documentation : Administering PHD Using the CLI</div><div class="wiki-content group" id="main-content">
<p>This section describes the administrative actions that can be performed via Pivotal Command Center's command line interface (CLI).</p><p><style type="text/css">/*<![CDATA[*/
div.rbtoc1400035784112 {padding: 0px;}
div.rbtoc1400035784112 ul {list-style: disc;margin-left: 0px;}
div.rbtoc1400035784112 li {margin-left: 0px;padding-left: 0px;}
/*]]>*/</style><div class="toc-macro rbtoc1400035784112">
<ul class="toc-indentation">
<li><a href="#AdministeringPHDUsingtheCLI-ManagingaCluster">Managing a Cluster</a>
<ul class="toc-indentation">
<li><a href="#AdministeringPHDUsingtheCLI-StartingaCluster">Starting a Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-StoppingaCluster">Stopping a Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-RestartingaCluster">Restarting a Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ReconfiguringaCluster">Reconfiguring a Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-Add/RemoveServices">Add / Remove Services</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-AddHoststoCluster">Add Hosts to Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-RetrievingConfigurationaboutaDeployedCluster">Retrieving Configuration about a Deployed Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ListingClusters">Listing Clusters</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ExpandingaCluster">Expanding a Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ShrinkingaCluster">Shrinking a Cluster</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-DecommissioningNodes">Decommissioning Nodes</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-HighAvailability">High Availability</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-Security">Security</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-UninstallingaCluster">Uninstalling a Cluster</a></li>
</ul>
</li>
<li><a href="#AdministeringPHDUsingtheCLI-ManagingHAWQ">Managing HAWQ</a>
<ul class="toc-indentation">
<li><a href="#AdministeringPHDUsingtheCLI-InitializingHAWQ">Initializing HAWQ</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-StartingHAWQ">Starting HAWQ</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-StoppingHAWQ">Stopping HAWQ</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ModifyingHAWQUserConfiguration">Modifying HAWQ User Configuration</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ExpandingHAWQ">Expanding HAWQ</a></li>
</ul>
</li>
<li><a href="#AdministeringPHDUsingtheCLI-ManagingRolesandHosts">Managing Roles and Hosts</a>
<ul class="toc-indentation">
<li><a href="#AdministeringPHDUsingtheCLI-ManagingLocally">Managing Locally</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-ManagingRemotely">Managing Remotely</a></li>
</ul>
</li>
<li><a href="#AdministeringPHDUsingtheCLI-PivotalHDServicesReference">Pivotal HD Services Reference</a>
<ul class="toc-indentation">
<li><a href="#AdministeringPHDUsingtheCLI-OverridingDirectoryPermissions">Overriding Directory Permissions</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-PivotalHDUsersandGroups">Pivotal HD Users and Groups</a></li>
<li><a href="#AdministeringPHDUsingtheCLI-PivotalHDPorts">Pivotal HD Ports</a></li>
</ul>
</li>
</ul>
</div></p><p><span class="confluence-anchor-link" id="AdministeringPHDUsingtheCLI-ManagingACluster"></span></p><h2 id="AdministeringPHDUsingtheCLI-ManagingaCluster">Managing a Cluster</h2><h3 id="AdministeringPHDUsingtheCLI-StartingaCluster">Starting a Cluster</h3><p>You can use the <code>start</code> command to start all the configured services of the cluster, to start individual services configured for the cluster, and to start individual roles on a specific set of hosts.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client start --help
Usage: /usr/bin/icm_client start [options]
Options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
-l CLUSTERNAME, --clustername=CLUSTERNAME
the name of the cluster on which the operation is
performed
-s SERVICES, --service=SERVICES
service to be started
-f, --force forcibly start cluster (even if install is incomplete)
-r ROLES, --role=ROLES
The name of the role which needs to be started
-o HOSTFILE, --hostfile=HOSTFILE
The absolute path for the file containing host names
for the role which needs to be started
</pre>
</div></div><p>The following table describes the list of values for the HDFS, MapRed, ZooKeeper, HBase, and HAWQ services:</p><div class="table-wrap"><table class="confluenceTable"><tbody><tr><th class="confluenceTh"><p>Option</p></th><th class="confluenceTh"><p>Description</p></th></tr><tr><td class="confluenceTd"><p><code>start</code></p></td><td class="confluenceTd"><p>Starts all configured cluster services in the right topological order based on service dependencies.</p></td></tr><tr><td class="confluenceTd"><p><code>-s</code></p></td><td class="confluenceTd"><p>Starts the specified service and all services it depends on in the right topological order. The supported services are HDFS, Yarn, Zookeeper, Hbase, Hive, HAWQ, Pig, and Mahout.</p></td></tr><tr><td class="confluenceTd"><p><code>-r</code></p></td><td class="confluenceTd"><p>Starts only the specified role on a specific set of hosts. Hosts can be specified using the -o option.</p></td></tr><tr><td class="confluenceTd"><p><code>-f</code></p></td><td class="confluenceTd"><p>Forces the cluster to start even if the installation is incomplete.</p></td></tr></tbody></table></div><p>The first time the cluster is started, Pivotal HD implicitly initializes the cluster. For subsequent invocations of the <code>start</code> command, the cluster is not initialized.</p><p> </p><p>Cluster initialization includes the following:</p><ul><li>Namenode format</li><li>Create directories on the local filesystem of cluster nodes and on the hdfs, with the correct permission overrides. See the <a href="#AdministeringPHDUsingtheCLI-OverridingDirectoryPermissions">Overriding Directory Permissions</a> section.</li><li>Create HDFS directories for additional services, such as HBase, if these are included in the configured services.</li></ul> <div class="aui-message warning shadowed information-macro">
<p class="title">Notes</p>
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>Refer to the "Verifying the Cluster Nodes for Pivotal HD" section to make sure the cluster services are up and running.</p><p>Make sure you back up all the data prior to installing or starting a new cluster on nodes that have pre-existing data on the configured mount points.</p>
</div>
</div>
<p>For example:<br/> Cluster level start:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client start -l CLUSTERNAME
</pre>
</div></div><p>Service level start:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client start -l CLUSTERNAME -s hdfs
</pre>
</div></div><p>Role level start:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client start -l CLUSTERNAME -r datanode -o hostfile
</pre>
</div></div><h3 id="AdministeringPHDUsingtheCLI-StoppingaCluster">Stopping a Cluster</h3><p>You can use the <code>stop</code> command to stop an entire cluster, to stop a single service, and to stop a single role on a specific set of hosts on which it is configured.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client stop -h
Usage: icm_client stop [options]
Options:
-h, --help Show this help message and exit
-v, --verbose Increase output verbosity
-l CLUSTERNAME, --clustername=CLUSTERNAME
The name of the cluster on which the operation is
performed
-s SERVICES, --service=SERVICES
Service to be stopped
-r ROLES, --role=ROLES
The name of the role which needs to be stopped
-o HOSTFILE, --hostfile=HOSTFILE
The absolute path for the file containing host names
for the role that needs to be stopped
</pre>
</div></div><p>The following table describes the list of values for the HDFS, MapRed, ZooKeeper, HBase, and HAWQ services.</p><div class="table-wrap"><table class="confluenceTable"><tbody><tr><th class="confluenceTh"><p>Option</p></th><th class="confluenceTh"><p>Description</p></th></tr><tr><td class="confluenceTd"><p><code>stop</code></p></td><td class="confluenceTd"><p>Stops all configured cluster services in the right topological order, based on service dependencies.</p></td></tr><tr><td class="confluenceTd"><p><code>-s</code></p></td><td class="confluenceTd"><p>Stops the specified service and all the dependent services in the right topological order. The supported services are HDFS, Yarn, Zookeeper, HBase, Hive, HAWQ, Pig, and Mahout.</p></td></tr><tr><td class="confluenceTd"><p><code>-r</code></p></td><td class="confluenceTd"><p>Stops the specified role on a specific set of hosts. Hosts can be specified using the -o option.</p></td></tr></tbody></table></div><p>For example:<br/> Cluster level stop:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client stop -l CLUSTERNAME
</pre>
</div></div><p>Service level stop:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client stop -l CLUSTERNAME -s hdfs
</pre>
</div></div><p>Role level stop:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client stop -l CLUSTERNAME -r datanode -o hostfile
</pre>
</div></div><h3 id="AdministeringPHDUsingtheCLI-RestartingaCluster">Restarting a Cluster</h3><p>You can use the <code>restart</code> command to stop, then restart, a cluster.</p><p>See Stopping a Cluster and Starting a Cluster, above, for more details about the stop and start operations.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client restart -h
Usage: /usr/bin/icm_client restart [options]
Options:
-h, --help Show this help message and exit
-v, --verbose Increase output verbosity
-l CLUSTERNAME, --clustername=CLUSTERNAME
The name of the cluster on which the operation is
performed
-s SERVICES, --service=SERVICES
The service to be restarted
-f, --force Forcibly start cluster (even if install is incomplete)
-r ROLES, --role=ROLES
The name of the role which needs to be started
-o HOSTFILE, --hostfile=HOSTFILE
The absolute path for the file containing host names
for the role which needs to be started
</pre>
</div></div><p><span class="confluence-anchor-link" id="AdministeringPHDUsingtheCLI-Reconfiguring"></span></p><h3 id="AdministeringPHDUsingtheCLI-ReconfiguringaCluster">Reconfiguring a Cluster</h3><p>Run the <code>reconfigure </code>command to update specific configurations for an existing cluster.</p> <div class="aui-message warning shadowed information-macro">
<p class="title">Caution</p>
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>Running the <code>reconfigure </code>command on a secure cluster will disable security.</p>
</div>
</div>
<p>Some cluster-specific configurations cannot be updated:</p> <div class="aui-message warning shadowed information-macro">
<p class="title">Important</p>
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<ul><li>Reconfiguring the topology of a cluster (host-to-role mapping) is not allowed; for example, changing the NameNode to a different node or adding a new set of datanodes to a cluster.</li><li>Properties based on hostnames; for example, <code>fs.defaultFS</code> and <code>dfs.namenode.http-address</code>.</li><li>Properties with directory paths as values.</li></ul>
</div>
</div>
<p>The following table lists properties that can only be changed with a <code>--force</code> option.</p> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<ul><li>Make sure you have completed all necessary prerequisites before changing any of the following properties with the force flag.</li><li>Incorrect provisioning can put the cluster into an inconsistent or unusable state.</li></ul>
</div>
</div>
<div class="table-wrap"><table class="confluenceTable"><tbody><tr><th class="confluenceTh"><p>Property Name</p></th><th class="confluenceTh"><p>Configuration File</p></th></tr><tr><td class="confluenceTd"><p><code>datanode.disk.mount.points</code></p></td><td class="confluenceTd"><p><code>clusterConfig.xml</code></p></td></tr><tr><td class="confluenceTd"><p><code>namenode.disk.mount.points</code></p></td><td class="confluenceTd"><p><code>clusterConfig.xml</code></p></td></tr><tr><td class="confluenceTd"><p><code>secondary.namenode.disk.mount.points</code></p></td><td class="confluenceTd"><p><code>clusterConfig.xml</code></p></td></tr><tr><td class="confluenceTd"><p><code>hawq.master.directory</code></p></td><td class="confluenceTd"><p><code>clusterConfig.xml</code></p></td></tr><tr><td class="confluenceTd"><p><code>hawq.segment.directory</code></p></td><td class="confluenceTd"><p><code>clusterConfig.xml</code></p></td></tr><tr><td class="confluenceTd" colspan="1"><code>zookeeper.data.dir</code></td><td class="confluenceTd" colspan="1"><code>clusterConfig.xml</code></td></tr></tbody></table></div><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client reconfigure -h
Usage: /usr/bin/icm_client reconfigure [options]
Options:
-h, --help show this help message and exit
-l CLUSTERNAME, --clustername=CLUSTERNAME
the name of the cluster on which the operation is
performed
-c CONFDIR, --confdir=CONFDIR
Directory path where cluster configuration is stored
-s, --noscanhosts Do not verify cluster nodes.
-p, --nopreparehosts Do not preparehosts as part of deploying the cluster.
-j JDKPATH, --java=JDKPATH
Location of Sun Java JDK RPM (Ex: jdk-
7u15-linux-x64.rpm). Ignored if -p is specified
-t, --ntp Synchronize system clocks using NTP. Optionally takes
NTP server as argument. Defaults to pool.ntp.org
(requires external network access). Ignored if -p is
specified
-d, --selinuxoff Disable SELinux. Ignored if -p is specified
-i, --iptablesoff Disable iptables. Ignored if -p is specified
-y SYSCONFIGDIR, --sysconf=SYSCONFIGDIR
[Only if HAWQ is part of the deploy] Directory
location of the custom conf files (sysctl.conf and
limits.conf) which will be appended to
/etc/sysctl.conf and /etc/limits.conf on slave nodes.
Default: /usr/lib/gphd/gphdmgr/hawq_sys_config/.
Ignored if -p is specified
-f, --force Forcibly reconfigure the cluster (allows changes to
any servicesConfigGlobals property)</pre>
</div></div><p><strong>To reconfigure an existing cluster:</strong></p><ol><li>Stop the cluster:<br/> <code> icm_client stop -l CLUSTERNAME</code></li><li>Fetch the configurations for the cluster into a local directory:<br/> <code>icm_client fetch-configuration -l CLUSTERNAME -o LOCALDIR</code></li><li>Edit the configuration files in the cluster configuration directory (<code>LOCALDIR</code>).</li><li>Reconfigure the cluster:<br/> <code>icm_client reconfigure -l CLUSTERNAME -c LOCALDIR</code></li></ol><p>Following an upgrade or reconfiguration, you need to synchronize the configuration files, as follows:<strong> <br/> </strong></p><ol><li>Fetch the new templates that come with the upgraded software by running <code>icm_client fetch-template</code>.</li><li>Retrieve the existing configuration from the database using <code>icm_client fetch-configuration</code>.</li><li>Synchronize the new configurations (<code>hdfs/hadoop-env</code>) from the template directory to the existing cluster configuration directory.</li><li>Upgrade or reconfigure service by specifying the cluster configuration directory with updated contents.</li></ol><h3 id="AdministeringPHDUsingtheCLI-Add/RemoveServices">Add / Remove Services</h3><p>Services can be added / removed using the <code>icm_client reconfigure</code> command.</p><ul><li>Edit the <code>clusterConfig.xml</code> file to add or remove services from the service list in the <code>services</code> tag.</li><li>Edit the <code>hostRoleMapping</code> section to add or remove hosts for the specific services configured.</li><li>Edit the <code>servicesConfigGlobals</code> if required for the specific service added.</li><li>Follow the steps for <a href="#AdministeringPHDUsingtheCLI-ReconfiguringaCluster">Reconfiguring a Cluster</a>.</li><li>In a new deployment, you can use the <code>-p</code> or <code>-s</code> option to disable scanhosts or preparehosts on the newly added hosts.</li><li>If you want to prepare the new hosts with Java, or if you want to disable iptables or SELinux, follow the instructions for installing Java mentioned in the Deploying a Cluster section of this document.</li></ul> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>Removing a specific service using the <code>icm_client reconfigure</code> command does not remove RPMs from the nodes. The RPMs are only removed when the cluster is uninstalled.</p>
</div>
</div>
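<p>For example, the reconfigure-based workflow described in the list above can be sketched as follows. The cluster name <code>test</code> and the directory <code>~/ClusterConfigDir</code> are placeholders; substitute your own values.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"># Stop the cluster, then fetch its current configuration
icm_client stop -l test
icm_client fetch-configuration -l test -o ~/ClusterConfigDir

# Edit the services list, hostRoleMapping, and (if needed) servicesConfigGlobals
# in ~/ClusterConfigDir/clusterConfig.xml, then apply the changes
icm_client reconfigure -l test -c ~/ClusterConfigDir
</pre>
</div></div>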
<h3 id="AdministeringPHDUsingtheCLI-AddHoststoCluster">Add Hosts to Cluster</h3><p>If you plan to add hosts as part of adding a new service, perform the following:</p><ul><li>Prepare the new hosts using the <code>icm_client preparehosts</code> command.</li><li>Refer to the <em>Add / Remove Services</em> section.</li></ul><p>If you plan to add or remove hosts as part of an existing service in the cluster, do the following:</p> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>You can only add or remove hosts for slave roles (refer to the <em>Expanding a Cluster</em> section for the list of slave roles). You cannot make host changes for any other role.</p>
</div>
</div>
<ul><li>Prepare the new hosts using the <code>icm_client preparehosts</code> command.</li><li>Add the new hosts to the corresponding slave roles in the <code>hostRoleMapping</code> section in <code>clusterConfig.xml</code>.</li><li>Follow the steps for <a href="#AdministeringPHDUsingtheCLI-ReconfiguringaCluster">Reconfiguring a Cluster</a> (see the command sketch after the note below).</li></ul> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>You cannot add one service and remove another at the same time. You have to perform these as two separate steps; however, you can add multiple services OR remove multiple services at the same time.</p>
</div>
</div>
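<p>The following is a minimal sketch of adding prepared hosts to existing slave roles. The hostfile name is a placeholder, and the <code>preparehosts</code> option shown is assumed to mirror the hostfile options of the other commands in this guide; verify the exact syntax with <code>icm_client preparehosts --help</code> and the example in the "Preparing the Cluster for Pivotal HD" section.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"># Prepare the new hosts (hostfile flag assumed; see "Preparing the Cluster for Pivotal HD")
icm_client preparehosts --hostfile=./new_hosts_file

# Add the new hosts to the slave roles in the hostRoleMapping section of
# clusterConfig.xml, then reconfigure the cluster
icm_client stop -l CLUSTERNAME
icm_client reconfigure -l CLUSTERNAME -c LOCALDIR
</pre>
</div></div>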
<p> </p><h3 id="AdministeringPHDUsingtheCLI-RetrievingConfigurationaboutaDeployedCluster">Retrieving Configuration about a Deployed Cluster</h3><p>Run the <code>fetch-configuration</code> command to fetch the configurations for an existing cluster and store them in a local file system directory.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client fetch-configuration -h
Usage: icm_client fetch-configuration [options]
Options:
-h, --help show this help message and exit
-o OUTDIR, --outdir=OUTDIR
Directory path to store the cluster configuration
template files
-l CLUSTERNAME, --clustername=CLUSTERNAME
Name of the deployed cluster whose configurations need
to be fetched
</pre>
</div></div><p><strong>Sample Usage</strong></p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client fetch-configuration -l CLUSTERNAME -o LOCALDIR</pre>
</div></div><h3 id="AdministeringPHDUsingtheCLI-ListingClusters">Listing Clusters</h3><p>Run the <code>list </code>command to see a list of all the installed clusters:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin]# icm_client list --help
Usage: icm_client list [options]
Options:
-h, --help show this help message and exit
-v, --verbose increase output verbosity
</pre>
</div></div><p><strong>Sample Usage</strong>:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client list</pre>
</div></div><p><span class="confluence-anchor-link" id="AdministeringPHDUsingtheCLI-ExpandCluster"></span></p><h3 id="AdministeringPHDUsingtheCLI-ExpandingaCluster">Expanding a Cluster</h3> <div class="aui-message warning shadowed information-macro">
<p class="title">Notes</p>
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<ul><li>Make sure you run <code>preparehosts</code> against the new slave hosts prior to adding them to the cluster. (See the <code>preparehosts</code> command example in the "Preparing the Cluster for Pivotal HD" section.)</li><li>If security is enabled on the cluster, you will have to re-enable it after adding a node.</li></ul>
</div>
</div>
<p>Run the <code>add-slaves </code>command to add additional slave hosts to an existing cluster. All the slave roles for <strong> <em>existing </em> </strong>cluster services will be installed on the new cluster hosts.</p><p>The following table indicates the services and their corresponding slave roles. Services not included in this list are not allowed for expansion (or shrinking).</p><div class="table-wrap"><table class="confluenceTable"><tbody><tr><th class="confluenceTh"><p>Service Name</p></th><th class="confluenceTh"><p>Slave</p></th></tr><tr><td class="confluenceTd"><p><code>hdfs</code></p></td><td class="confluenceTd"><p><code>datanode</code></p></td></tr><tr><td class="confluenceTd"><p><code>yarn</code></p></td><td class="confluenceTd"><p><code>yarn-nodemanager<br/> </code></p></td></tr><tr><td class="confluenceTd"><p><code>hbase</code></p></td><td class="confluenceTd"><p><code>hbase-regionserver</code></p></td></tr><tr><td class="confluenceTd"><p><code>hawq</code></p></td><td class="confluenceTd"><p><code>hawq-segment</code></p></td></tr></tbody></table></div><p>If you only want to install an individual component on a node, you should do this by manually editing the <code>clusterConfig.xml</code> file, then running the <code>reconfigure</code> command (see <a href="#AdministeringPHDUsingtheCLI-ReconfiguringaCluster">Reconfiguring a Cluster</a>).</p><p> </p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client add-slaves --help
Usage: /usr/bin/icm_client add-slaves [options]
Options:
-h, --help show this help message and exit
-l CLUSTERNAME, --clustername=CLUSTERNAME
the name of the cluster on which the operation is
performed
-f HOSTFILE, --hostfile=HOSTFILE
file containing new-line separated list of hosts that
are going to be added.
-s, --noscanhosts Do not verify cluster nodes.
-j JAVAHOME, --java_home=JAVAHOME
JAVA_HOME path to verify on cluster nodes
-p, --nopreparehosts Do not preparehosts as part of deploying the cluster.
-k JDKPATH, --java=JDKPATH
Location of Sun Java JDK RPM (Ex: jdk-
7u15-linux-x64.rpm). Ignored if -p is specified
-t, --ntp Synchronize system clocks using NTP. Optionally takes
NTP server as argument. Defaults to pool.ntp.org
(requires external network access). Ignored if -p is
specified
-d, --selinuxoff Disable SELinux for the newly added nodes. Ignored if
-p is specified
-i, --iptablesoff Disable iptables for the newly added nodes. Ignored if
-p is specified
-y SYSCONFIGDIR, --sysconf=SYSCONFIGDIR
[Only if HAWQ is part of the deploy] Directory
location of the custom conf files (sysctl.conf and
limits.conf) which will be appended to
/etc/sysctl.conf and /etc/limits.conf of the newly
addded slave nodes. Default:
/usr/lib/gphd/gphdmgr/hawq_sys_config/. Ignored if -p
is specified</pre>
</div></div><p><strong>Sample Usage:</strong></p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client add-slaves -l CLUSTERNAME -f slave_hostfile</pre>
</div></div><p> </p><p>Make sure you start the datanode and yarn-nodemanager roles on the newly added slave hosts.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client start -l CLUSTERNAME -r datanode -o hostfile
icm_client start -l CLUSTERNAME -r yarn-nodemanager -o hostfile
</pre>
</div></div> <div class="aui-message warning shadowed information-macro">
<p class="title">Important</p>
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<ul><li>If HBase is configured, start the hbase-regionserver role on the new hosts as well (see the example below).</li><li>Don't expect data blocks to be distributed to the newly added slave nodes immediately.</li></ul>
</div>
</div>
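<p>For example, assuming HBase is configured and using the same hostfile as above, the region servers can be started with a role-level start; the role name follows the slave-role table earlier in this section:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client start -l CLUSTERNAME -r hbase-regionserver -o hostfile
</pre>
</div></div>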
<div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>If HAWQ is configured, refer to the <em>Expanding HAWQ</em> section.</p>
</div>
</div>
<div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>Hive does not have any slave roles, and therefore cannot be provisioned for an expansion.</p>
</div>
</div>
<p><span class="confluence-anchor-link" id="AdministeringPHDUsingtheCLI-ShrinkCluster"></span></p><h3 id="AdministeringPHDUsingtheCLI-ShrinkingaCluster">Shrinking a Cluster</h3> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>Make sure you decommission the slave hosts (refer to the next section) prior to removing them, to avoid potential data loss.</p>
</div>
</div>
<p> </p><p>Run the <code>remove-slaves</code> command to remove slave hosts from an existing cluster. All the slave roles for the existing cluster services will be removed from the given hosts.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client remove-slaves --help
Usage: /usr/bin/icm_client remove-slaves [options]
Options:
-h, --help show this help message and exit
-l CLUSTERNAME, --clustername=CLUSTERNAME
the name of the cluster on which the operation is
performed
-f HOSTFILE, --hostfile=HOSTFILE
file containing new-line separated list of hosts that
are going to be removed.
</pre>
</div></div><p><br class="atl-forced-newline"/> <strong>Sample Usage</strong>:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client remove-slaves -l CLUSTERNAME -f hostfile
</pre>
</div></div><h3 id="AdministeringPHDUsingtheCLI-DecommissioningNodes">Decommissioning Nodes</h3><p>Decommissioning is required to prevent potential loss of data blocks when you shut down or remove slave hosts from a cluster. This is not an instant process, since it requires replication of a potentially large number of blocks to other cluster nodes.</p><p>The following are the manual steps to decommission slave hosts (datanodes, nodemanagers) from a cluster (an example exclude file is shown at the end of this section).</p><ul><li>On the NameNode host machine:<ul><li>Edit the <code> /etc/gphd/hadoop/conf/dfs.exclude </code> file and add the datanode hostnames to be removed (separated by a newline character). Make sure you use the FQDN for each hostname.</li><li><p>Execute the dfs refresh command:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin] sudo -u hdfs hdfs dfsadmin –refreshNodes
</pre>
</div></div></li></ul></li><li>On the Yarn Resource Manager host machine:<ul><li>Edit the <code>/etc/gphd/hadoop/conf/yarn.exclude</code> file and add the node manager hostnames to be removed (separated by a newline character). Make sure you use the FQDN for each hostname.</li><li><p>Execute the Yarn refresh command:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin] sudo -u hdfs yarn rmadmin -refreshNodes
</pre>
</div></div></li></ul></li><li>Check decommission status:<ul><li>Monitor decommission progress in the NameNode Web UI at <code> http://NAMENODE_FQDN:50070 </code> by navigating to the Decommissioning Nodes page.</li><li>Check whether the admin state has changed to Decommission In Progress for the DataNodes being decommissioned. When all the DataNodes report their state as Decommissioned, all the blocks have been replicated.</li></ul></li><li>Shut down the decommissioned nodes:<ul><li>Stop the datanode and yarn-nodemanager roles on the targeted slaves to be removed.</li></ul></li></ul><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">[gpadmin] icm_client stop -l CLUSTERNAME -r datanode -o hostfile
[gpadmin] icm_client stop -l CLUSTERNAME -r yarn-nodemanager -o hostfile
</pre>
</div></div> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>For HBase regionservers, you can proceed with shutting down the region servers on the slave hosts to be removed. It is preferable to use the <code>graceful_stop</code> script that HBase provides, if the load balancer is disabled.</p>
</div>
</div>
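<p>For reference, the exclude files referenced in the decommissioning steps above are plain newline-separated lists of FQDNs; the hostnames below are placeholders only:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"># Example /etc/gphd/hadoop/conf/dfs.exclude (yarn.exclude uses the same format)
slave03.example.com
slave04.example.com
</pre>
</div></div>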
<p><span class="confluence-anchor-link" id="AdministeringPHDUsingtheCLI-EnablingHA"></span></p><h3 id="AdministeringPHDUsingtheCLI-HighAvailability">High Availability</h3><h4 id="AdministeringPHDUsingtheCLI-EnableHA"><span class="confluence-anchor-link" id="AdministeringPHDUsingtheCLI-EnableHA"></span></h4><h4 id="AdministeringPHDUsingtheCLI-EnablingHighAvailabilityonaCluster" style="text-align: left;">Enabling High Availability on a Cluster</h4><ul style="text-align: left;"><li>High availability is disabled by default.</li><li>Currently we only support Quorum Journal-based storage for high availability.</li><li>PCC 2.1 is the first version to support HA. If you are running an earlier version, download and import the latest version of Pivotal Command Center (PCC). (See <a href="InstallingPHDUsingtheCLI.html">Installing PHD Using the CLI</a> for details.)</li></ul> <div class="aui-message warning shadowed information-macro">
<span class="aui-icon icon-warning">Icon</span>
<div class="message-content">
<p>Before you enable HA for any cluster, make sure you take into consideration our recommended <a href="InstallingPHDUsingtheCLI.html#InstallingPHDUsingtheCLI-HABestPractices">HA Best Practices</a>.</p>
</div>
</div>
<p style="text-align: left;">To enable HA for a new cluster, follow the instructions provided in the <em>High Availability</em> section of <a href="InstallingPHDUsingtheCLI.html">Installing PHD Using the CLI</a>.</p><p style="text-align: left;">To enable HA for an existing cluster, see below.</p><ol style="text-align: left;"><li><p>Stop the cluster:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client stop -l CLUSTERNAME</pre>
</div></div></li><li><p>For HAWQ users, stop HAWQ.<br/>From the HAWQ master, as <code>gpadmin</code>, run the following:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">/etc/init.d/hawq stop</pre>
</div></div></li><li>Back up the NameNode data: copy <code>{dfs.namenode.name.dir}/current</code> to a backup directory.</li><li><p>Fetch the configurations for the cluster into a local directory:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">icm_client fetch-configuration -l CLUSTERNAME -o LOCALDIR</pre>
</div></div></li><li>Edit <code>clusterConfig.xml</code> as follows:<br/><p>Comment out the <code>secondarynamenode</code> role in the <code>hdfs</code> service.</p><p>Uncomment the <code>standbynamenode</code> and <code>journalnode</code> roles in the <code>hdfs</code> service.</p><p>Uncomment the <code>nameservices</code>, <code>namenode1id</code>, <code>namenode2id</code>, <code>journalpath</code>, and <code>journalport</code> entries in <code>servicesConfigGlobals</code>.</p></li><li><p>Edit <code>hdfs/hdfs-site.xml</code> as follows:<br/>Uncomment the following properties:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"><property>
<name>dfs.nameservices</name>
<value>${nameservices}</value>
</property>
<property>
<name>dfs.ha.namenodes.${nameservices}</name>
<value>${namenode1id},${namenode2id}</value>
</property>
<property>
<name>dfs.namenode.rpc-address.${nameservices}.${namenode1id}</name>
<value>${namenode}:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.${nameservices}.${namenode2id}</name>
<value>${standbynamenode}:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.${nameservices}.${namenode1id}</name>
<value>${namenode}:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.${nameservices}.${namenode2id}</name>
<value>${standbynamenode}:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://${journalnode}/${nameservices}</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.${nameservices}</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hdfs/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>${journalpath}</value>
</property>
<!-- Namenode Auto HA related properties -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- END Namenode Auto HA related properties --></pre>
</div></div><p>Comment out the following properties:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"><property>
<name>dfs.namenode.secondary.http-address</name>
<value>${secondarynamenode}:50090</value>
<description>
The secondary namenode http server address and port.
</description>
</property></pre>
</div></div></li><li><p>Edit <code>yarn/yarn-site.xml</code> as follows:<code> <br/> </code>Set the following property/value:<code> <br/> </code></p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"><property>
<name>mapreduce.job.hdfs-servers</name>
<value>hdfs://${nameservices}</value>
</property>
</pre>
</div></div></li><li><p>Edit <code>hdfs/core-site.xml</code> as follows:</p><p>Set the following property/value:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"><property>
<name>fs.defaultFS</name>
<value>hdfs://${nameservices}</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property></pre>
</div></div><p>Then uncomment the following property:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"><property>
<name>ha.zookeeper.quorum</name>
<value>${zookeeper-server}:${zookeeper.client.port}</value>
</property></pre>
</div></div></li><li><p>Edit <code>hbase/hbase-site.xml</code> as follows:<br/>Set the following property/value:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"><property>
<name>hbase.rootdir</name>
<value>hdfs://${nameservices}/apps/hbase/data</value>
<description>The directory shared by region servers and into
which HBase persists. The URL should be 'fully-qualified'
to include the filesystem scheme. For example, to specify the
HDFS directory '/hbase' where the HDFS instance's namenode is
running at namenode.example.org on port 9000, set this value to:
hdfs://namenode.example.org:9000/hbase. By default HBase writes
into /tmp. Change this configuration else all data will be lost
on machine restart.
</description>
</property></pre>
</div></div></li><li><p>To enable HA for HAWQ, comment out the default <code>DFS_URL</code> property and uncomment <code>DFS_URL</code> in <code>hawq/gpinitsystem_config</code> as follows:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;">#DFS_URL=${namenode}:${dfs.port}/hawq_data
#### For HA uncomment the following line
DFS_URL=${nameservices}/hawq_data</pre>
</div></div></li><li><p>Add the following properties to <code>hawq/hdfs-client.xml</code>:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
<pre class="theme: Confluence; brush: java; gutter: false" style="font-size:12px;"> <property>