Get the list name from a full URL

Problem description:

I have several hundred different, random URLs coming in, all pointing at documents in libraries, with no other parameters, spread across different farms, different site collections and different sites. The goal is to download each file from SharePoint as a binary array.

So, for example, an incoming URL = http://a.b.c.d.e/f.g/h.i/j/k/l/m.docx

How do I get (a) the correct site collection root URL, (b) the root URL of the web and (c) the root URL of the library from this? The only approach I can think of right now is to strip the URL piece by piece until .RootFolder no longer throws an exception... or, the other way round, to start with the first part of the URL and keep appending pieces until RootFolder no longer throws, then query the subwebs, and so on.
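
For illustration, System.Uri already exposes the path segments of such a URL, so the candidate URLs that such a trial-and-error loop would probe can be listed up front. A minimal sketch, using only System.Uri and the example URL above:

Uri requestUri = new Uri("http://a.b.c.d.e/f.g/h.i/j/k/l/m.docx"); 
string baseUrl = requestUri.GetLeftPart(UriPartial.Authority);   // "http://a.b.c.d.e" 

// Candidate URLs, from the longest path (minus the file) down to the host root; 
// one of them is the web URL a ClientContext would accept. 
// Requires System.Linq for Take(). 
for (int i = requestUri.Segments.Length - 1; i >= 0; i--) 
{ 
    string candidate = baseUrl + string.Join(string.Empty, requestUri.Segments.Take(i)); 
    Console.WriteLine(candidate); 
} 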

The point is that the ClientContext constructor accepts the URL of a web/site. However, if the URL is specified in the following format:

http://site/web/documents/file.docx 

then a System.Net.WebException will be thrown.

The following example demonstrates how to resolve a ClientContext from a request URL:

public static class ClientContextUtilities 
{ 
    /// <summary> 
    /// Resolves a client context from a request URL by probing candidate URLs, 
    /// starting from the longest path and stripping one segment at a time. 
    /// </summary> 
    /// <param name="requestUri">Full URL of the requested resource (e.g. a document)</param> 
    /// <param name="context">Resolved client context, or null if none could be created</param> 
    /// <param name="credentials">Credentials to use, or null for the default credentials</param> 
    /// <returns>true if a client context could be resolved; otherwise false</returns> 
    public static bool TryResolveClientContext(Uri requestUri, out ClientContext context, ICredentials credentials) 
    { 
        context = null; 
        var baseUrl = requestUri.GetLeftPart(UriPartial.Authority); 
        for (int i = requestUri.Segments.Length; i >= 0; i--) 
        { 
            var path = string.Join(string.Empty, requestUri.Segments.Take(i)); 
            string url = string.Format("{0}{1}", baseUrl, path); 
            try 
            { 
                context = new ClientContext(url); 
                if (credentials != null) 
                    context.Credentials = credentials; 
                context.ExecuteQuery(); 
                return true; 
            } 
            catch (Exception) 
            { 
                // not a valid web URL (yet) - try the next, shorter candidate 
            } 
        } 
        return false; 
    } 
} 

Usage:

ClientContext context; 
if (ClientContextUtilities.TryResolveClientContext(requestUri, out context, null)) 
{ 
    using (context) 
    { 
        // server-relative URL of the file, e.g. "/web/documents/file.docx" 
        var baseUrl = requestUri.GetLeftPart(UriPartial.Authority); 
        var fileServerRelativeUrl = requestUri.ToString().Replace(baseUrl, string.Empty); 
        var file = context.Web.GetFileByServerRelativeUrl(fileServerRelativeUrl); 
        context.Load(file); 
        context.Load(context.Web); 
        context.Load(context.Site); 
        context.ExecuteQuery(); 
    } 
} 
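
Once a context has been resolved this way, the three URLs asked for above can be read off directly. A minimal sketch, assuming SharePoint 2013 CSOM and the same context and fileServerRelativeUrl as in the snippet above, placed inside the using (context) block:

var file = context.Web.GetFileByServerRelativeUrl(fileServerRelativeUrl); 
var list = file.ListItemAllFields.ParentList;            // the library that holds the file 

context.Load(context.Site, s => s.Url);                  // (a) site collection root URL 
context.Load(context.Web, w => w.Url);                   // (b) web root URL 
context.Load(list, l => l.Title,                         // (c) library title ... 
                   l => l.RootFolder.ServerRelativeUrl); //     ... and its root folder URL 
context.ExecuteQuery(); 

Console.WriteLine(context.Site.Url); 
Console.WriteLine(context.Web.Url); 
Console.WriteLine(list.Title + " @ " + list.RootFolder.ServerRelativeUrl); 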

Since your goal is to download the file, there is a much simpler way to do it without parsing the URL into parts at all.

For example, using the WebClient.DownloadFile method:

private static void DownloadFile(Uri fileUri, ICredentials credentials, string localFileName) 
{ 
    using (var client = new WebClient()) 
    { 
        client.Credentials = credentials; 
        client.DownloadFile(fileUri, localFileName); 
    } 
} 
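
Since the stated goal is a binary array rather than a file on disk, WebClient.DownloadData returns the content as a byte[] directly. A minimal variant of the method above, with the same assumptions about credentials:

private static byte[] DownloadFileBytes(Uri fileUri, ICredentials credentials) 
{ 
    using (var client = new WebClient()) 
    { 
        client.Credentials = credentials; 
        // returns the file content as a byte array instead of writing it to disk 
        return client.DownloadData(fileUri); 
    } 
} 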

Would this also work if the URI is the root site collection URL? Downloading directly would be even better! But a few columns need to be checked before the download is allowed. – edelwater 2015-04-03 14:11:58


It should; at least I have never experienced any issues, regardless of whether it is the root site collection or not. – 2015-04-03 14:18:17

I have worked out a working approach, but it looks complicated, so any suggestions for improvement are welcome. All it does is "download the file if a specific column has the value 'yes'":

public void getDocument(Document doc) 
{ 
    // get the filename from the URL 
    Uri uri = new Uri(doc.uri); 
    doc.filename = System.IO.Path.GetFileName(uri.LocalPath); 

    // doc.uri.Replace(filename, "") would also keep any ?a&b query string, 
    // so rebuild the path without the last segment instead: 
    string[] splitDocUri = doc.uri.Split('/'); 
    string fullPathWithoutFileName = ""; 
    for (int i = 0; i < splitDocUri.Length - 1; i++) 
    { 
        fullPathWithoutFileName += (splitDocUri[i] + '/'); 
    } 

    // get the context info via "_api/contextinfo" 
    HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(fullPathWithoutFileName + "_api/contextinfo"); 
    req.Method = "POST"; 
    req.Accept = "application/json; odata=verbose"; 
    req.Credentials = new NetworkCredential(doc.username, doc.password, doc.domain); 
    req.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f"); 
    req.ContentLength = 0; 
    BypassCertificateError(); 
    HttpWebResponse rp = (HttpWebResponse)req.GetResponse(); 
    Stream postStream = rp.GetResponseStream(); 
    StreamReader postReader = new StreamReader(postStream); 
    string results = postReader.ReadToEnd(); 

    // parse out the needed values (JavaScriptSerializer requires a reference to System.Web.Extensions) 
    JavaScriptSerializer jss = new JavaScriptSerializer(); 
    var d = jss.Deserialize<dynamic>(results); 
    string formDigestValue = d["d"]["GetContextWebInformation"]["FormDigestValue"]; 
    // the full url of the web, e.g. "http://server:7777/level1/level 2" 
    string webFullUrl = d["d"]["GetContextWebInformation"]["WebFullUrl"]; 
    // the full url of the site collection, e.g. "http://server:7777" 
    string siteFullUrl = d["d"]["GetContextWebInformation"]["SiteFullUrl"]; 

    // now we can create a context 
    ClientContext ctx = new ClientContext(webFullUrl); 
    ctx.ExecutingWebRequest += 
        new EventHandler<WebRequestEventArgs>(ctx_MixedAuthRequest); 
    BypassCertificateError(); 
    ctx.AuthenticationMode = ClientAuthenticationMode.Default; 
    ctx.Credentials = new NetworkCredential(doc.username, doc.password, doc.domain); 

    // get the list (document library) that contains the file 
    Microsoft.SharePoint.Client.File file = ctx.Web.GetFileByServerRelativeUrl(uri.AbsolutePath); 
    List list = file.ListItemAllFields.ParentList; 
    ctx.Load(list); 
    ctx.ExecuteQuery(); 

    // execute a CAML query against it to find the item by file name 
    CamlQuery camlQuery = new CamlQuery(); 
    camlQuery.ViewXml = 
        "<View><Query><Where><Eq><FieldRef Name='FileLeafRef'/>" + 
        "<Value Type='Text'>" + doc.filename + "</Value></Eq></Where></Query>" + 
        "<RowLimit>1</RowLimit></View>"; 
    ListItemCollection listItems = list.GetItems(camlQuery); 
    ctx.Load(listItems); 
    try 
    { 
        ctx.ExecuteQuery(); 
    } 
    catch 
    { 
        // e.g. no access, or the list name was deduced incorrectly 
        throw; 
    } 

    // and now retrieve the items needed 
    if (listItems.Count == 1) 
    { 
        ListItem item = listItems[0]; 
        // additional check on testColumn to decide whether to download 
        string testColumn; 
        if (item.IsPropertyAvailable("testColumn")) 
        { 
            testColumn = (string)item["testColumn"]; 
        } 

        FileInformation fileInformation = 
            Microsoft.SharePoint.Client.File.OpenBinaryDirect(ctx, 
                (string)item["FileRef"]); 
        doc.bytes = ReadFully(fileInformation.Stream); 
    } 
    else 
    { 
        doc.errormessage = "Error: No document found"; 
    } 
} 
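
The method above calls three helpers that are not shown (BypassCertificateError, ctx_MixedAuthRequest, ReadFully). The following is only a sketch of what they might look like, based on the common patterns these names suggest (ignore SSL certificate errors, force Windows authentication on CSOM requests, copy a stream into a byte array); it is an assumption, not the author's actual code, and uses System.Net, System.IO and Microsoft.SharePoint.Client:

// Assumption: accept any SSL certificate (typical workaround for self-signed certificates). 
private static void BypassCertificateError() 
{ 
    ServicePointManager.ServerCertificateValidationCallback = 
        (sender, certificate, chain, sslPolicyErrors) => true; 
} 

// Assumption: tell SharePoint to use Windows authentication instead of forms-based authentication. 
private static void ctx_MixedAuthRequest(object sender, WebRequestEventArgs e) 
{ 
    e.WebRequestExecutor.RequestHeaders.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f"); 
} 

// Assumption: read a stream completely into a byte array. 
private static byte[] ReadFully(Stream input) 
{ 
    using (var ms = new MemoryStream()) 
    { 
        input.CopyTo(ms); 
        return ms.ToArray(); 
    } 
} 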